An algorithm for finding roots of algebraic polynomials based on dynamic PID neurons

Yang Xu-li
Railway Traffic Department
Hunan Railway Professional Technology College
Zhuzhou, China
e-mail: [email protected]

Zeng Zhe-zhao
College of Electrical and Information Engineering
Changsha University of Science and Technology
Changsha, China
e-mail: [email protected]

Abstract— In this paper, we propose an algorithm for finding roots of an algebraic polynomial based on a PID (Proportional-Integral-Derivative) neuron network, in which the hidden layer of the PID neural network is composed of proportional, integral and differential neurons. The input of the PID neural network is the error signal between the value of the polynomial $f(x)$ at $x_k$ and the given value 0, and its output is the root to be found. Specific examples illustrate that the proposed method can find the roots of polynomials with less computation, high accuracy and rapid convergence.

Keywords- neural network; algebraic polynomial; roots; PID neurons; algorithm

I. INTRODUCTION

Finding the roots of polynomials quickly and accurately is an important problem in various areas of control and communication systems engineering, signal processing and many other areas of science and technology [1-4]. Over the last decades, a large number of different methods for finding all polynomial roots have been developed. Most of them yield accurate results only for polynomials of small degree or can treat only special polynomials.

To date, the better-known methods for finding roots of polynomials include the Jenkins/Traub method [5], the Markus/Frenzel method [4], the Laguerre method [6], the Routh method [7], the Truong, Jeng and Reed method [8], the Fedorenko method [9], the Halley method [10], and several modified Newton methods [11-13]. Furthermore, Gyurhan H. Nedzhibov and Milko G. Petkov proposed a family of iterative methods for the simultaneous extraction of all roots of an algebraic polynomial [14]. Unlike all other known simultaneous methods of third or higher order, that method does not require computing first or higher derivatives of the function to carry out its iterations.

In this paper, we propose an algorithm for finding roots of a polynomial based on dynamic PID neurons. The approach can find the roots with less computation, higher accuracy and faster convergence.

II. THE ALGORITHM FOR FINDING ROOTS OF A POLYNOMIAL

A. The algorithm description

We start by defining our typical polynomial of degree $n$ as

$$f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \qquad (1)$$

Here the coefficients $a_i$ are given, with $a_n \neq 0$. Usually, in science and engineering applications, the coefficients are all real, and then the zeros are either real or occur in conjugate-complex pairs. The essence of the proposed algorithm is to obtain the roots of the polynomial by training the proportional, integral and derivative coefficients of a PID neuron network; the iteration will repeatedly evaluate $f$ and its derivative $f'$ at the current root estimate.
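Since the iteration described below repeatedly needs $f(x(k))$ and $f'(x(k))$ (see Step 2 in Section II-C), both can be evaluated together by Horner's scheme. The following Python sketch is illustrative only and is not from the original paper; the coefficient ordering (highest degree first) is an assumption.

```python
def horner_eval(coeffs, x):
    """Evaluate f(x) and f'(x) together by Horner's scheme.

    coeffs = [a_n, a_{n-1}, ..., a_1, a_0] (highest degree first) --
    an illustrative ordering, not prescribed by the paper.
    """
    f, df = 0.0, 0.0
    for a in coeffs:
        df = df * x + f    # derivative update uses the previous value of f
        f = f * x + a
    return f, df

# Example: f(x) = x^2 - 2 at x = 3 gives f = 7, f' = 6
print(horner_eval([1.0, 0.0, -2.0], 3.0))
```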

The model of the PID neuron network for finding roots of polynomials is as follows. We assume that $x$ is a root of the polynomial (1) and $x_0$ is an initial guess close to $x$. The model of the root-finding algorithm based on PID neurons is shown in Fig. 1. As Fig. 1 indicates, the PID neuron network consists of three layers of neurons ($1 \times 3 \times 1$), and each neuron has an input, a state and an output. The state $s^{(1)}$ of the input-layer neuron is decided by the proportion-threshold function, while both its input and its output are decided by the proportion function, i.e.

The input $neti^{(1)}(k)$ of the input-layer neuron is

$$neti^{(1)}(k) = e(k) \qquad (2)$$

The state $s^{(1)}(k)$ of the input-layer neuron is

$$s^{(1)}(e(k)) = \begin{cases} 1, & e(k) > 1 \\ e(k), & -1 \le e(k) \le 1 \\ -1, & e(k) < -1 \end{cases} \qquad (3)$$

The output $neto^{(1)}(k)$ of the input-layer neuron is

$$neto^{(1)}(k) = s^{(1)}(e(k)) \qquad (4)$$
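The proportion-threshold function in (3), which reappears in (9) and (12) below, is simply a clamp of its argument to the interval [-1, 1]. A minimal Python sketch (the function name is mine, not the paper's):

```python
def prop_threshold(v):
    """Proportion-threshold function of Eqs. (3), (9) and (12): clamp v to [-1, 1]."""
    return max(-1.0, min(1.0, v))

print(prop_threshold(2.3), prop_threshold(0.4), prop_threshold(-7.0))  # 1.0 0.4 -1.0
```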



Fig. 1. The model of the PID neural network.

In the hidden layer, the state $s_1^{(2)}$ is decided by the proportion function, the state $s_2^{(2)}$ by the integral function, and the state $s_3^{(2)}$ by the derivative function. Hence, the state functions are respectively defined as follows:

$$s_1^{(2)}(k) = neti_1^{(2)}(k) \qquad (5)$$

$$s_2^{(2)}(k) = s_2^{(2)}(k-1) + neti_2^{(2)}(k) \qquad (6)$$

$$s_3^{(2)}(k) = neti_3^{(2)}(k) - neti_3^{(2)}(k-1) \qquad (7)$$

where $neti_j^{(2)}(k)$, $j = 1, 2, 3$, denotes the input of the $j$th hidden-layer neuron, and

$$neti_j^{(2)}(k) = neto^{(1)}(k), \quad j = 1, 2, 3 \qquad (8)$$

Besides, the output of the hidden-layer neurons is decided by the proportion-threshold function, i.e.

$$neto_j^{(2)}(k) = \begin{cases} 1, & s_j^{(2)}(k) > 1 \\ s_j^{(2)}(k), & |s_j^{(2)}(k)| \le 1 \\ -1, & s_j^{(2)}(k) < -1 \end{cases}, \quad j = 1, 2, 3 \qquad (9)$$

In the output layer of the neural network, the input, the state and the output are respectively as follows:

$$neti^{(3)}(k) = \sum_{j=1}^{3} w_j \, neto_j^{(2)}(k) \qquad (10)$$

$$s^{(3)}(k) = neti^{(3)}(k) \qquad (11)$$

$$neto^{(3)}(k) = \begin{cases} 1, & s^{(3)}(k) > 1 \\ s^{(3)}(k), & -1 < s^{(3)}(k) < 1 \\ -1, & s^{(3)}(k) < -1 \end{cases} \qquad (12)$$

and

$$x(k) = \begin{cases} a \cdot \mathrm{sign}(x(0)) + neto^{(3)}(k), & x(0) \neq 0 \\ neto^{(3)}(k), & x(0) = 0 \end{cases} \qquad (13)$$

Here, $x(k)$ is the approximate root of the polynomial $f(x)$ at the $k$th iteration, and the coefficient $a$ is chosen according to the range in which the root lies. For example, if $x = 5$ is a root of the polynomial, then $x(k) = 4 + neto^{(3)}(k)$; and if $x = -4$, then $x(k) = -3 - neto^{(3)}(k)$.
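Taken together, Eqs. (2) to (13) define one forward pass of the 1×3×1 network, mapping the error e(k) to a new root estimate x(k). The sketch below is my reading of those equations rather than the authors' code; the function and variable names are mine, and the offset a·sign(x(0)) of Eq. (13) is assumed to be precomputed.

```python
def forward_pass(e, w, s2_prev, neti_prev, offset):
    """One forward pass of the 1x3x1 PID network, Eqs. (2)-(13).

    e         : error e(k) = -f(x(k))
    w         : weights [w1, w2, w3] of Eq. (10)
    s2_prev   : integral memory s_2^(2)(k-1) of Eq. (6)
    neti_prev : previous input neti_3^(2)(k-1) of Eq. (7)
    offset    : a * sign(x(0)) from Eq. (13), or 0 when x(0) == 0
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    neto1 = clamp(e)                                   # Eqs. (2)-(4): input layer
    s = (neto1, s2_prev + neto1, neto1 - neti_prev)    # Eqs. (5)-(8): P, I, D states
    neto2 = [clamp(sj) for sj in s]                    # Eq. (9): hidden-layer outputs
    neti3 = sum(wj * oj for wj, oj in zip(w, neto2))   # Eq. (10): output-layer input
    x_k = offset + clamp(neti3)                        # Eqs. (11)-(13): root estimate
    return x_k, neto2, s[1], neto1                     # also return the new I/D memory
```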

B. The algorithm description of the dynamic PID neurons

The error function and the objective function are respectively defined as follows:

$$e(k) = 0 - y(k) = -f(x(k)) \qquad (14)$$

$$J = \frac{1}{2} e^2(k) \qquad (15)$$

The weight $w_j$ in formula (10) is recursively adjusted using the steepest-descent method, i.e.

$$w_j(k+1) = w_j(k) - \mu \frac{\partial J}{\partial w_j(k)} \qquad (16)$$

From formulas (10) to (15) we have

$$\frac{\partial J}{\partial w_j(k)} = \frac{\partial J}{\partial e(k)} \frac{\partial e(k)}{\partial f(x(k))} \frac{\partial f(x(k))}{\partial x(k)} \frac{\partial x(k)}{\partial neto^{(3)}(k)} \frac{\partial neto^{(3)}(k)}{\partial s^{(3)}(k)} \frac{\partial s^{(3)}(k)}{\partial neti^{(3)}(k)} \frac{\partial neti^{(3)}(k)}{\partial w_j(k)} \qquad (17)$$

Since $\frac{\partial J}{\partial e(k)} = e(k)$, $\frac{\partial e(k)}{\partial f(x(k))} = -1$, $\frac{\partial f(x(k))}{\partial x(k)} = f'(x(k))$, $\frac{\partial x(k)}{\partial neto^{(3)}(k)} = 1$, $\frac{\partial neto^{(3)}(k)}{\partial s^{(3)}(k)} = 1$, $\frac{\partial s^{(3)}(k)}{\partial neti^{(3)}(k)} = 1$, and $\frac{\partial neti^{(3)}(k)}{\partial w_j(k)} = neto_j^{(2)}(k)$, formula (17) simplifies to

$$\frac{\partial J}{\partial w_j(k)} = -e(k) \, neto_j^{(2)}(k) \, f'(x(k)) \qquad (18)$$

From formulas (16) and (18) we obtain

$$w_j(k+1) = w_j(k) + \mu \, e(k) \, neto_j^{(2)}(k) \, f'(x(k)), \quad j = 1, 2, 3 \qquad (19)$$

Here, $\mu$ is the learning rate, with $0 < \mu < 1$.

The weights $w_j$, $j = 1, 2, 3$, of the dynamic PID neural network are normalized as

$$\hat{w}_j(k+1) = w_j(k+1) \Big/ \sum_{j=1}^{3} w_j(k+1) \qquad (20)$$

Formula (10) is then rewritten as


$$neti^{(3)}(k) = \sum_{j=1}^{3} \hat{w}_j \, neto_j^{(2)}(k) \qquad (21)$$
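A minimal sketch of the training step, combining the gradient update (19) and the normalization (20); the names are illustrative, and the division in (20) assumes the weight sum is nonzero.

```python
def update_weights(w, e, neto2, dfdx, mu=0.01):
    """Eq. (19): w_j <- w_j + mu * e(k) * neto_j^(2)(k) * f'(x(k));
    Eq. (20): normalize the weights by their sum (assumed nonzero)."""
    w_new = [wj + mu * e * oj * dfdx for wj, oj in zip(w, neto2)]
    total = sum(w_new)
    return [wj / total for wj in w_new]
```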

C. Algorithm steps

The goal is to find a solution to $f(x) = 0$ given one initial approximation $x(0)$.

Step 1. Given the initial value $x(0)$, choose a learning rate $0 < \mu < 1$, set the weights $w_j = 0$, $j = 1, 2, 3$, and choose an arbitrarily small positive real number $Tol$. Let $s_2^{(2)}(0) = 0$ and $neti_3^{(2)}(0) = 0$. Choose the coefficient $a$.

Step 2. Compute the error $e(k)$ and the objective value $J$ using formulas (14) and (15): $e(k) = -f(x(k))$, $J = \frac{1}{2} e^2(k)$. Compute the derivative $f'(x(k))$ of the polynomial $f(x)$.

Step 3. Compute the hidden-layer outputs $neto_j^{(2)}(k)$, $j = 1, 2, 3$, using formula (9).

Step 4. Adjust the weights $w_j$, $j = 1, 2, 3$, using formula (19), and normalize them to $\hat{w}_j$, $j = 1, 2, 3$, using formula (20).

Step 5. Compute the root estimate $x(k)$ of the polynomial $f(x)$ from formulas (10) to (13).

Step 6. If $J \le Tol$, stop training the PID neurons and output the root $x(k)$ of the polynomial $f(x)$; otherwise, return to Step 2 and repeat the above process.
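The six steps can be assembled into a short driver loop. The sketch below is my reading of the algorithm, not the authors' MATLAB program; it uses numpy.polyval and numpy.polyder for f and f', and its helper logic mirrors Eqs. (2) to (21). The stopping tolerance Tol, the learning rate mu and the offset coefficient a correspond to the quantities named in Step 1.

```python
import numpy as np

def pid_root(coeffs, x0, a, mu=0.01, tol=1e-12, max_iter=500):
    """Find one root of f(x) (coeffs ordered highest degree first)
    with the dynamic PID-neuron iteration; a sketch of Steps 1-6."""
    dcoeffs = np.polyder(coeffs)
    w = np.zeros(3)                                  # Step 1: w_j = 0
    s2_prev, neti_prev = 0.0, 0.0                    # integral / derivative memory
    offset = a * np.sign(x0) if x0 != 0 else 0.0     # offset of Eq. (13)
    clamp = lambda v: max(-1.0, min(1.0, v))
    x = x0
    for _ in range(max_iter):
        e = -np.polyval(coeffs, x)                   # Step 2: e(k) = -f(x(k)), Eq. (14)
        if 0.5 * e**2 <= tol:                        # Step 6: J <= Tol
            break
        dfdx = np.polyval(dcoeffs, x)                # Step 2: f'(x(k))
        neto1 = clamp(e)                                  # Eqs. (2)-(4)
        s = (neto1, s2_prev + neto1, neto1 - neti_prev)   # Eqs. (5)-(8)
        neto2 = np.array([clamp(sj) for sj in s])         # Step 3, Eq. (9)
        w = w + mu * e * dfdx * neto2                     # Step 4, Eq. (19)
        w = w / w.sum()                                   # Eq. (20); assumes nonzero sum
        x = offset + clamp(float(w @ neto2))              # Step 5, Eqs. (10)-(13), (21)
        s2_prev, neti_prev = s[1], neto1
    return x
```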

III. NUMERICAL EXAMPLES

We have carried out numerical experiments with different functions and initial approximations. All programs were implemented in MATLAB 6.5. We compare the iterative methods under consideration on the following criteria: number of iterations and absolute error.

Example 1: Consider the algebraic polynomial [14]:

$$f(x) = (x+5)(x+1)(x-5)(x-7)(x-9)$$

In the proposed method, let $a$ be equal to $-4$, $0$, $4$, $6$, $10$, respectively, and let the learning rate be $\mu = 0.01$. Table I shows the results of the proposed method and the methods of [14], where $p = [-5, -1, 5, 7, 9]$.
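As an illustration of how such a test could be set up (the figures in Table I come from the paper, not from running this snippet), the coefficients of Example 1 can be generated from its known roots, and each root can then be searched for with the pid_root sketch above using a suitable offset a:

```python
import numpy as np

# Example 1: f(x) = (x+5)(x+1)(x-5)(x-7)(x-9)
coeffs = np.poly([-5.0, -1.0, 5.0, 7.0, 9.0])
print(coeffs)  # monic coefficients: [1, -15, 22, 438, -1175, -1575]

# Hypothetical call to the pid_root sketch from Section II-C, aiming at the root x = 5
# with the paper's offset a = 4 and the initial component x0 = 4.6 from Table I:
# root = pid_root(coeffs, x0=4.6, a=4.0, mu=0.01)
```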

Example 2: Consider the following equation [14]:

$$f(x) = x^3 + 5x^2 - 4x - 20 + \cos(x^3 + 5x^2 - 4x - 20) - 1$$

In the proposed method, let $a$ be equal to $-4$, $-1$, $3$, respectively, and let the learning rate be $\mu = 0.01$. Table II shows the results of the proposed method and the methods of [14], where $p = [-5, -2, 2]^T$.

TABLE I. THE RESULTS OF EXAMPLE 1

Method                 Initial x_0                Iter. N   x_k             ||x_k - p||
Algorithm proposed     [-5.5,-1.4,4.6,6.6,9.4]    1         [-5,-1,5,7,9]   0.0 (exact)
                       [-5.7,-1.8,4.1,6.2,9.8]    1         [-5,-1,5,7,9]   0.0 (exact)
Formula (8) of [14]    [-5.5,-1.4,4.6,6.6,9.4]    3         [-5,-1,5,7,9]   0.0 (exact)
Formula (9) of [14]    [-5.5,-1.4,4.6,6.6,9.4]    3         [-5,-1,5,7,9]   2.8e-14
Formula (10) of [14]   [-5.5,-1.4,4.6,6.6,9.4]    3         [-5,-1,5,7,9]   6.7e-14
Formula (8) of [14]    [-5.7,-1.8,4.1,6.2,9.8]    3         [-5,-1,5,7,9]   7.4e-09
Formula (9) of [14]    [-5.7,-1.8,4.1,6.2,9.8]    4         [-5,-1,5,7,9]   0.0 (exact)
Formula (10) of [14]   [-5.7,-1.8,4.1,6.2,9.8]    4         [-5,-1,5,7,9]   0.0 (exact)
MD [14]                [-5.5,-1.4,4.6,6.6,9.4]    4         [-5,-1,5,7,9]   3.1e-10
                       [-5.7,-1.8,4.1,6.2,9.8]    5         [-5,-1,5,7,9]   7.3e-13
EA [14]                [-5.5,-1.4,4.6,6.6,9.4]    3         [-5,-1,5,7,9]   0.0 (exact)
                       [-5.7,-1.8,4.1,6.2,9.8]    4         [-5,-1,5,7,9]   0.0 (exact)

TABLE II. THE RESULTS OF EXAMPLE 2

Method                 Initial x_0          Iter. N   x_k          ||x_k - p||
Algorithm proposed     [-5.1, -1.9, 2.2]    1         [-5,-2,2]    0.00
Formula (8) of [14]    [-5.1, -1.9, 2.1]    4         [-5,-2,2]    1.8e-08
Formula (9) of [14]    [-5.1, -1.9, 2.1]    4         [-5,-2,2]    2.0e-13
Formula (10) of [14]   [-5.1, -1.9, 2.1]    5         [-5,-2,2]    8.4e-02
MD [14]                [-5.1, -1.9, 2.1]    6         [-5,-2,2]    2.7e-05
EA [14]                [-5.1, -1.9, 2.1]    4         [-5,-2,2]    5.9e+02

IV. CONCLUSIONS

Tables I and II show that the proposed approach obtains exact results by training the neural network only once. The results in Tables I and II


also illustrate that the proposed method has much higher accuracy and requires less computation than the other methods from reference [14]. Hence, the proposed algorithm is a very effective method.

REFERENCES

[1] Richard L. Burden and J. Douglas Faires. Numerical Analysis (Seventh Edition). Thomson Learning, Inc., Aug. 2001, pp. 47-103.

[2] Zeng Zhe-zhao and Wen Hui. Numerical Computation (First Edition). Beijing: Qinghua University Press, China, Sept. 2005, pp. 88-108.

[3] Xu Changfa, Wang Minmin and Wang Ninghao. An accelerated iteration solution to nonlinear equation in large scope. J. Huazhong Univ. of Sci. & Tech.(Nature Science Edition), 34(4):122-124, 2006.

[4] Markus Lang and Bernhard-Christian Frenzel. Polynomial root finding. IEEE Signal Processing Letters, 1(10):141-143, Oct. 1994.

[5] Jenkins, M. A. and J. F. Traub. Algorithm 493: Zeros of a real polynomial. ACM Trans. Math. Software, vol. 1, p. 178, June 1975.

[6] H. J. Orchard. The Laguerre method for finding the zeros of polynomials. IEEE Trans. on Circuits and Systems, 36(11):1377-1381, Nov. 1989.

[7] T.N. Lucas. Finding roots of polynomials by using the Routh array. IEEE Electronics Letters, 32(16):1519-1521, Aug. 1996.

[8] T.K. Truong, J.H. Jeng, and I.S. Reed. Fast algorithm for computing the roots of error locator polynomials up to degree 11 in Reed-Solomon decoders. IEEE Trans. Commun., vol.49, pp.779-783, May 2001.

[9] Sergei V. Fedorenko, Peter V. Trifonov. Finding roots of polynomials over finite fields. IEEE Trans. Commun. 50(11):1709-1711, Nov. 2002.

[10] Cui Xiang-zhao, Yang Da-di and Long Yao. The fast Halley algorithm for finding all zeros of a polynomial. Chinese Journal of engineering mathematics, 23(3):511-517, June 2006.

[11] Ehrlich L.W. A modified Newton method for polynomials. Comm ACM, 10: 107-108, 1967.

[12] Huang Qing-long. An improvement on a modified Newton method. Numerical mathematics: A Journal of Chinese Universities, 11(4):313-319, Dec. 2002.

[13] Huang Qing-long, Wu Jiancheng. On a modified Newton method for simultaneous finding polynomial zeros. Journal on Numerical methods and computer applications(Beijing, China), 28(4):292-298, Dec. 2006.

[14] Gyurhan H. Nedzhibov and Milko G. Petkov. A family of iterative methods for simultaneous extraction of all roots of algebraic polynomial. Applied Mathematics and Computation, 162 (2005), pp. 427-433.
