1
© Ivo Bukovsky
CTU in Prague, FME
(Title-slide background: block diagrams of a state-space dynamic system and of a time-delay dynamic quadratic neural unit (TmD-DQNU); both are shown in detail on later slides.)
New Neural Architectures
and
New Adaptive Evaluation of Chaotic Time Series
Tutorial for the 2008-IEEE-ICAL
2
© Ivo Bukovsky
CTU in Prague, FME
New Neural Architectures
and
New Adaptive Evaluation of Chaotic Time Series
Tutorial for the 2008-IEEE-ICAL Qingdao, China
Ivo Bukovsky, Jiri Bila, Madan M. Gupta, & Zeng-Guang Hou
3
© Ivo Bukovsky
CTU in Prague, FME
Ivo Bukovský, Jiří Bíla
Ú12110.3 Department of Instrumentation and Control Engineering
Faculty of Mechanical Engineering
CZECH TECHNICAL UNIVERSITY IN PRAGUE
New Neural Architectures&
New Adaptive Evaluation of Chaotic Time Series
4
© Ivo Bukovsky
CTU in Prague, FME
Madan M. Gupta
Intelligent Systems Research Laboratory
College of Engineering, UNIVERSITY OF SASKATCHEWAN
New Neural Architectures&
New Adaptive Evaluation of Chaotic Time Series
5
© Ivo Bukovsky
CTU in Prague, FME
Zeng-Guang Hou
Key Laboratory of Complex Systems and Intelligence Science
Institute of Automation
The Chinese Academy of Sciences
New Neural Architectures&
New Adaptive Evaluation of Chaotic Time Series
6
© Ivo Bukovsky
CTU in Prague, FME
TUTORIAL OUTLINE
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )• Static HONNU
• Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
- Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
7
© Ivo Bukovsky
CTU in Prague, FME
TUTORIAL OUTLINE (cont.)
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
8
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )-- Static HONNU
-- Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
- Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
1. Introduction
- Conventional Neural Units (Neurons)
9
© Ivo Bukovsky
CTU in Prague, FME
Synaptic operation of conventional artificial neurons, i.e., the linear aggregating function
The common feature of conventional neural units is their linear aggregating operation (summation) of processed neural inputs, i.e., the signals from neural synapses.
1. Introduction
- Conventional Neural Units (Neurons)
10
© Ivo Bukovsky
CTU in Prague, FME
Conventional artificial neurons with nonlinear somatic operation (output function)
Conventional neural units are also distinguished by the type of output mapping of the aggregated synapses.
1. Introduction
- Conventional Neural Units (Neurons) (cont.)
11
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction
- Conventional Neural Units (Neurons) (cont.)
Summation
Conventional artificial neurons with internal neural dynamics - the discrete case (recurrent neurons)
Conventional neural units are also distinguished by their continuous or discrete dynamics.
12
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction- Conventional Neural Units (Neurons) (cont.)
A common major classification of neural units (and neural networks), based on conventional neural unit architectures, is as follows:
• Static Neural Units (and Networks)
– according to the nonlinear output mapping function,
– according to purpose and learning methods,
– …
• Dynamic Neural Units (and Networks)
– discrete (recurrent)
– continuous
There are many other classifying criteria such as the neural network functionality (SOM, SVM) , tapped delay implementations (TDNN), …
The new proposed classification of neural units does not interfere with the existing classification of neural networks.
The new proposed classification of neural units naturally emerges from the development of new neural units and from successful concepts in natural science and engineering tasks.
13
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction- Motivation for the development of Nonconventional Neural Units
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )-- Static HONNU
-- Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
- Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
14
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction
- Motivation for the development of Nonconventional Neural Units
• There are many motivating reasons to develop and use new neural architectures according to the individual engineering fields.
• We introduce only those major motivations that led us to the development of new neural units.
– Sample-by-sample evaluation of complex dynamic systems (time series).
15
© Ivo Bukovsky
CTU in Prague, FME
Analysis and further evaluation of complex systems by NN is limited by the natural black-box effect of NN (complicated nonlinear input-output mapping).
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
16
© Ivo Bukovsky
CTU in Prague, FME
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
The nested input-output mapping of a conventional multilayer feed-forward neural network (the source of the black-box effect):

$$y_{out}=f(\mathbf{w}_1,\mathbf{W}_2,\dots,\mathbf{W}_{n-1},\mathbf{w}_n,\mathbf{u})=\varphi_n\Big(\sum_{j}w^{\,n}_{j}\,\varphi_{n-1}\Big(\sum_{i}w^{\,n-1}_{ji}\,\varphi_{n-2}\big(\cdots\varphi_{1}\big(\sum_{m}w^{\,1}_{m}\,u_{m}\big)\cdots\big)\Big)\Big)$$
17
© Ivo Bukovsky
CTU in Prague, FME
Chaotic time-series evaluation methods supported by artificial neural networks suffer from the black- (gray-) box effect: the nonlinear output neural function (e.g., a sigmoid) of a conventional ANN prevents us from analyzing the knowledge hidden in a trained network.
Hence the need for a new neural architecture whose mathematical structure can be analyzed more easily: a small number of neural parameters, simple mathematical architectures, and computationally more powerful neural units (neurons).
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
18
© Ivo Bukovsky
CTU in Prague, FME
Citing some ideas regarding feed-forward NN and the evaluation of chaotic time series:
[26] Vitkaj, J.: Analysis of Chaotic Signals by Means of Neural Networks. [PhD Thesis] (in Czech), Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic, 2001.
[27] Mankova, R.: Prediction of Chaotic Signals Using Neural Networks with Focus on Analysis of Cardiosignals. [Candidate Dissertation] (in Czech), Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic, 1997.
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
19
© Ivo Bukovsky
CTU in Prague, FME
• “Small” FFNN (~4/6/1) seemed to extract the characteristic orbit of the system or the characteristic transition between orbits ([26], p.59). The FFNN with a smaller number of neurons can learn the chaotic behavior starting from particular initial conditions and can capture the transition to another (coexisting) attractor.
• “Large” FFNN (16/24/1) generated signals with a frequency spectrum more similar to that of the original signal. A FFNN with a higher number of neurons tends to learn the dynamics of the system as a whole.
• Low-dimensional chaotic systems with appropriately "returnable orbits" can be very accurately predicted by simple neural models ([26], p.62). A simple FFNN can very accurately predict systems behaving apparently on a single attractor.
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
20
© Ivo Bukovsky
CTU in Prague, FME
• A predicting NN model does not have to characterize the modeled process as a whole; the extracted characteristic can be used for the modeling and classification of chaotic systems ([26], p.71). Even though the trained NN describes only the actual dynamics of a system for the orbit (attractor) on which the system at that time behaves, it can be used for modeling and classification of chaotic signals.
• NN can extract an attractor’s geometrical characteristics from noisy data and from a low number of input data ([26], p.79);
• The design of a NN model of a patient will not be easy because of the complexity and multi-attractor behavior. ([26], p.104).
• NN models should be utilized also for the decomposition of multi-attractor dynamics. Due to multi-attractor dynamics, the common nonlinear methods are incorrect over the whole length of the recorded signal, they do not converge, or they result in a significant variance of results, preventing precise and detailed medical diagnoses ([26], p.104).
1. Introduction (cont.)
- Motivation Conventional Neural Networks and Chaotic Time Series
21
© Ivo Bukovsky
CTU in Prague, FME
• A new adaptive methodology for the evaluation of complex chaotic systems for monitoring and diagnosing purposes, such as HRV, was needed.
– The need to evaluate important characteristics and sudden as well as continuous changes in the dynamics of complex systems in real time led us to
– found a novel methodology for complex-signal variability evaluation (in real time), which required a tool.
1. Introduction (cont.)- Motivation for the development of Nonconventional Neural Units
22
© Ivo Bukovsky
CTU in Prague, FME
• Development and use of nonconventional neural architectures (the tool)
– a minimum number of neural parameters and a
sufficiently strong approximating capability,
– cognitive capabilities of artificial neural networks,
and
– simplicity of mathematical notation of a problem
consisting in a low number of parameters and simple
dynamic structure.
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
23
© Ivo Bukovsky
CTU in Prague, FME
The research work in 2003 followed two major goals:
To develop a tool that could capture and evaluate the properties of real-world (nonlinear, dynamic) complex systems and that would enable the methodology to be developed and applied in real time.
To establish a new methodology for the evaluation of complicated chaotic (including biomedical, HRV) signals and for the online monitoring and diagnosing of complex systems.
1. Introduction (cont.)
- Motivation for the development of Nonconventional Neural Units
24
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures- Higher-Order Nonlinear Neural Units
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )
-- Static HONNU
-- Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
- Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
Q...Quadratic, C...Cubic
25
© Ivo Bukovsky
CTU in Prague, FME
HONNU – Higher-Order Nonlinear Neural Units
Year 2003:
– The concept of higher-order nonlinear synaptic operation, with focus on individual neural units, had been introduced in [49] by M. M. Gupta et al.
– Our first implementations of static HONNU in static problems (Redlapalli et al. [50]) and dynamic problems (Song et al. [51], Bukovsky et al. [52]).
2. Development of New Neural Architectures- Higher-Order Nonlinear Neural Units
26
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures
- - Static HONNU … Higher Order Nonlinear Neural Unit
Static HONNU (Higher-Order Nonlinear Neural Unit)
(Figure: neural inputs u0, u1, …, un, where u0 … a constant neural bias, form the augmented input vector xa entering the nonlinear aggregation of neural inputs ν = f_HONNU(xa, Wa); the neural output function φ(ν) produces the neural output y.)
xa ... an augmented input vector into a nonlinear synaptic operation
Wa ... an augmented matrix of neural weights
f_HONNU ~ f_QNU, f_CNU, …  (QNU … quadratic neural unit, CNU … cubic neural unit)
CNU … cubic neural unit
27
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures (cont.)
- - Static HONNU: QNU … quadratic neural unit

Static Quadratic Neural Unit
(Figure: neural inputs u0 … un, where u0 … a constant neural bias, form the augmented input vector xa; the quadratic aggregation of neural inputs gives ν, and the neural output function φ(ν) gives the neural output y.)
xa ... an augmented input vector into a nonlinear aggregation function

$$\nu=f_{QNU}(\mathbf{x}_a,\mathbf{W}_a)=\sum_{i=0}^{n}\sum_{j=i}^{n}w_{ij}\,x_i\,x_j\in R^{1},\qquad\text{where } x_0=1.$$
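A minimal numerical sketch of the quadratic synaptic operation above (my own illustration, not code from the tutorial): the augmented vector xa = [1, u1, …, un] and an upper-triangular weight matrix Wa give ν = Σ_{i≤j} w_ij x_i x_j.

```python
import numpy as np

def qnu_output(u, W_a, phi=np.tanh):
    """Static QNU: quadratic aggregation of the augmented input, then the output function phi."""
    x_a = np.concatenate(([1.0], u))          # x_0 = 1 ... constant neural bias
    nu = 0.0
    for i in range(len(x_a)):                 # nu = sum_{i=0..n} sum_{j=i..n} w_ij * x_i * x_j
        for j in range(i, len(x_a)):
            nu += W_a[i, j] * x_a[i] * x_a[j]
    return phi(nu)

# illustrative use with random weights (assumed values, for demonstration only)
rng = np.random.default_rng(0)
n = 3
W_a = np.triu(rng.normal(size=(n + 1, n + 1)))   # upper-triangular augmented weight matrix
y = qnu_output(rng.normal(size=n), W_a)
```

Equivalently, ν = xaᵀ·Wa·xa when Wa is kept upper triangular, since the terms with i > j are then zero.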
28
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures (cont.)
- - Static HONNU: CNU … cubic neural unit

Static Cubic Neural Unit
(Figure: neural inputs u0 … un, where u0 … a constant neural bias, form the augmented input vector xa; the cubic aggregation of neural inputs gives ν, and the neural output function φ(ν) gives the neural output y.)
xa ... an augmented input vector into a nonlinear aggregation function

$$\nu=f_{Cubic}=\sum_{i=0}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}w_{ijk}\,x_i\,x_j\,x_k
= w_{000}\,x_0^{3}+w_{001}\,x_0^{2}x_1+\dots+w_{123}\,x_1x_2x_3+\dots+w_{133}\,x_1x_3^{2}+\dots+w_{1nn}\,x_1x_n^{2}+\dots+w_{nnn}\,x_n^{3},$$
where $x_0=1$.
29
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures (cont.)
- - Static HONNU
Contrary to common neural units, static HONNU can be naturally viewed from the point of view of:
• System approximation (e.g. Taylor polynomial)
• Higher order input inter-correlations
$$f(\mathbf{x})\cong f(\mathbf{x}_0)+\sum_{i=1}^{n}\frac{\partial f(\mathbf{x})}{\partial x_i}\Big|_{\mathbf{x}_0}(x_i-x_{0i})+\frac{1}{2}\sum_{i=1}^{n}\frac{\partial^{2} f(\mathbf{x})}{\partial x_i^{2}}\Big|_{\mathbf{x}_0}(x_i-x_{0i})^{2}+\frac{1}{3!}\sum_{i=1}^{n}\frac{\partial^{3} f(\mathbf{x})}{\partial x_i^{3}}\Big|_{\mathbf{x}_0}(x_i-x_{0i})^{3}+\dots$$

$$f(\mathbf{x})\cong w_0+\sum_{i=1}^{n}w_i\,x_i+\sum_{i=1}^{n}\sum_{j=i}^{n}w_{ij}\,x_i\,x_j+\sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}w_{ijk}\,x_i\,x_j\,x_k+\dots$$

(the terms up to the quadratic ones correspond to the quadratic neural unit QNU, up to the cubic ones to the cubic neural unit CNU, and further terms to further higher-order nonlinear neural units HONNU)
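As a hedged illustration of the approximation view (my own sketch, not from the tutorial): because ν is linear in the weights w_ij, a static QNU can be fitted to a nonlinear mapping by ordinary least squares; the target function and the data below are made up for the sketch.

```python
import numpy as np

def quadratic_features(U):
    """Rows of U are input vectors u; returns the terms x_i*x_j (i <= j) of the augmented x_a."""
    X = np.hstack([np.ones((U.shape[0], 1)), U])     # x_0 = 1 ... bias column
    cols = [X[:, i] * X[:, j]
            for i in range(X.shape[1]) for j in range(i, X.shape[1])]
    return np.column_stack(cols)

# illustrative target mapping and data (assumed for this sketch)
rng = np.random.default_rng(1)
U = rng.uniform(-1, 1, size=(200, 2))
f = np.sin(U[:, 0]) * np.cos(U[:, 1])                # unknown mapping to approximate

Phi = quadratic_features(U)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)          # weights w_ij of the QNU
print("max approximation error:", np.max(np.abs(Phi @ w - f)))
```

The fit plays the role of a second-order Taylor polynomial of the target mapping over the sampled region.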
30
© Ivo Bukovsky
CTU in Prague, FME
The correlation concept of the synaptic operation of static HONNU (Gupta, 2003).
(Scan from "Static and …", Gupta, 2003.)
2. Development of New Neural Architectures (cont.)
- Higher-Order Nonlinear Neural Units
31
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures (cont.)
- Higher-Order Nonlinear Neural Units
(Gupta et al., 2003)
(Figure: two equivalent sketches of the Static HONNU (Higher-Order Nonlinear Neural Unit). Neural inputs u0 … un (u0 … a constant neural bias) enter either a nonlinear synaptic preprocessor (nonlinear synaptic neural operation) followed by a static somatic neural operation φ(ν), or the nonlinear aggregation ν = f_HONNU(xa, Wa) followed by the output function; both yield the neural output y.)
Sketch intuitively closer to the concept of inter-correlations
Sketch intuitively closer to the concept of function approximation
32
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures
- - Dynamic HONNU
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )
-- Static HONNU
-- Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
-Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
33
© Ivo Bukovsky
CTU in Prague, FME
Continuous Dynamic QNU (Quadratic Neural Unit)
(Figure: neural inputs u(t) = [u1(t) … un(t)] and the bias u0, together with the fed-back neural state ξ(t), form the augmented vector xa; the nonlinear synaptic and somatic aggregation of neural inputs and neural dynamics (the past and present states affect future neural states) drives an integrator 1/s, and the neural output function φ(ξ) gives y(t).)

$$\frac{d\xi(t)}{dt}=\nu=\sum_{i=0}^{n+1}\sum_{j=i}^{n+1}w_{ij}\,x_i\,x_j,\qquad y(t)=\varphi\big(\xi(t)\big)$$

u0 … a constant neural bias
The neural state is represented by the variable ξ, which represents the level of the signal carried through the axon forward to the neural outputs.
2. Development of New Neural Architectures
- - Continuous Dynamic HONNU
s…the Laplace operator
34
© Ivo Bukovsky
CTU in Prague, FME
Continuous Dynamic-Order-Extended Cubic Neural Unit (DOE CNU)
(Figure: neural inputs u(t) and the bias u0, together with the chain of fed-back neural states ξ1(t) … ξm(t) produced by m integrators 1/s, form the augmented vector xa; the nonlinear synaptic and somatic aggregation of neural inputs and neural dynamics drives the integrator chain, and the output function φ(ξ1) gives the neural output y.)

$$\nu=\sum_{i=0}^{n+m}\sum_{j=i}^{n+m}\sum_{k=j}^{n+m}w_{ijk}\,x_i\,x_j\,x_k$$
2. Development of New Neural Architectures - - Continuous Dynamic HONNU (cont.)
s…the Laplace operator
35
© Ivo Bukovsky
CTU in Prague, FME
Stability-Improved Dynamic-Order-Extended Quadratic Neural Unit (DOE QNU)
τi ≥ 0 for i = 1…m, τmin > 0, wm = 1
(Figure: the quadratic aggregation ν = Σ_{i=0}^{n+m} Σ_{j=i}^{n+m} w_ij x_i x_j of the neural inputs u(t), the bias u0 (a constant neural bias), and the neural states ξ1(t) … ξm(t) is filtered through a chain of stable first-order blocks w1/(τ1·s + τmin), …, wm/(τm·s + τmin); the output function φ(ξ1) gives y.)
2. Development of New Neural Architectures - - Continuous Dynamic HONNU (cont.)
s…the Laplace operator
36
© Ivo Bukovsky
CTU in Prague, FME
Discrete Dynamic QNU
(Figure: neural inputs u(k−1) and the bias u0, together with the fed-back neural state ξ(k−1) delayed by one step (1/z), form the augmented vector xa; the nonlinear synaptic and somatic aggregation of neural inputs and neural dynamics gives the new state, and the output function φ(ξ) gives y.)

$$\nu=\xi(k)=\sum_{i=0}^{n+1}\sum_{j=i}^{n+1}w_{ij}\,x_i\,x_j,\qquad y=\varphi\big(\xi(k)\big)$$

u0 … a constant neural bias
2. Development of New Neural Architectures
- - Discrete Dynamic HONNU
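A minimal sketch of running the discrete dynamic QNU above (implementation details such as the output function are assumptions, not the tutorial's code): the state ξ(k−1) is appended to the augmented input vector and the quadratic aggregation produces ξ(k).

```python
import numpy as np

def discrete_dqnu_step(xi_prev, u_prev, W_a, phi=np.tanh):
    """One step of a discrete dynamic QNU.

    xi_prev : neural state xi(k-1)
    u_prev  : external inputs u_1..u_n at step k-1
    W_a     : (n+2) x (n+2) upper-triangular weight matrix over [1, u, xi]
    """
    x_a = np.concatenate(([1.0], u_prev, [xi_prev]))   # x_0 = 1 ... bias, last entry is the state
    xi = float(x_a @ np.triu(W_a) @ x_a)               # xi(k) = sum_{i<=j} w_ij x_i x_j
    return xi, phi(xi)                                 # new state and neural output y(k)

# illustrative run with small random weights (assumed, for demonstration only)
rng = np.random.default_rng(2)
n, steps = 2, 50
W_a = 0.1 * np.triu(rng.normal(size=(n + 2, n + 2)))
xi = 0.0
for k in range(steps):
    xi, y = discrete_dqnu_step(xi, rng.normal(size=n), W_a)
```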
37
© Ivo Bukovsky
CTU in Prague, FME
Discrete Dynamic CNU
(Figure: analogous to the discrete dynamic QNU, with neural inputs u(k−1), the bias u0 (a constant neural bias), and the one-step-delayed state ξ(k−1) aggregated cubically.)

$$\nu=\xi(k)=\sum_{i=0}^{n+1}\sum_{j=i}^{n+1}\sum_{l=j}^{n+1}w_{ijl}\,x_i\,x_j\,x_l,\qquad y=\varphi\big(\xi(k)\big)$$
2. Development of New Neural Architectures
- - Discrete Dynamic HONNU (cont.)
38
© Ivo Bukovsky
CTU in Prague, FME
Discrete Dynamic-Order-Extended QNU
(Figure: neural inputs u, the bias u0 (a constant neural bias), and m delayed neural states ξ(k−1), …, ξ(k−m) produced by a chain of step delays 1/z form the augmented vector xa; the nonlinear synaptic and somatic aggregation of neural inputs and neural dynamics gives ν = ξ(k), and the output function φ(ξ) gives y.)

$$\nu=\xi(k)=\sum_{i=0}^{n+m}\sum_{j=i}^{n+m}w_{ij}\,x_i\,x_j$$
2. Development of New Neural Architectures
- - Discrete Dynamic HONNU (cont.)
39
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU)
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )
-- Static HONNU
-- Dynamic HONNU
-Time-Delay Dynamic Neural Units (TmD-DNU)
-Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
40
© Ivo Bukovsky
CTU in Prague, FME
• The research group of the Center of Applied Cybernetics (CAK) at the Czech Technical University in Prague focuses strongly on time-delay systems:
Pavel Zitek
Tomas Vyhlidal
Goran Simeunovic
Thank you, next-door fellows, for the inspiration to develop Time-Delay Dynamic Neural Units.
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU)
41
© Ivo Bukovsky
CTU in Prague, FME
• The internal architecture of TmD-DNU is a
linear time-delay dynamic system that has an
infinite number of poles and zeros
• Some of the poles and zeros are significant; the positions of all of them are adapted by the learning algorithm.
This implies that TmD-DNU has a robust capability to approximate higher-order
dynamic systems
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) Cont.
42
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) (cont.)
Type 1 Time-Delay Dynamic Neural Unit (TmD1-DNU)
Ti ≥ 0 for i = 0…n, τ ≥ 0, τmin > 0
(Figure: each neural input ui(t) is delayed by an adaptable time delay Ti and weighted by wi (u0 … a constant neural bias, with weight w0); the weighted sum ν(t), minus the fed-back state ξ(t), passes through the block 1/((τ + τmin)·s), producing ξ(t); the output function φ(ξ(t)) gives y(t).)
s … the Laplace operator
43
© Ivo Bukovsky
CTU in Prague, FME
Ti≥ 0 for i=0…n , τ ≥0 , τmin >0
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) (cont.)
Type 1 Time Delay Dynamic Neural Unit
Adaptable: wi, Ti, τ

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$

$$\Xi(s)=\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)=\sum_{i=0}^{n}G_i(s)\,U_i(s),\qquad\text{where }\ \Xi(s)=L\{\xi(t)\}$$
s…the Laplace operator L{ }…the Laplace transform
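A hedged simulation sketch (my own illustration, with assumed parameter values) of the Type 1 unit's delay differential equation above, using simple forward-Euler integration and delayed evaluation of the inputs ui(t − Ti):

```python
import numpy as np

def simulate_tmd1_dnu(u_fn, w, T, tau, tau_min=0.05, t_end=20.0, dt=0.01, phi=np.tanh):
    """Forward-Euler simulation of (tau+tau_min)*dxi/dt + xi = sum_i w_i * u_i(t - T_i)."""
    steps = int(t_end / dt)
    xi, y = 0.0, []
    for k in range(steps):
        t = k * dt
        # delayed inputs; u_fn(t) must return the vector of inputs u_0..u_n at time t (u_0 = bias)
        aggregated = sum(w[i] * u_fn(max(t - T[i], 0.0))[i] for i in range(len(w)))
        dxi = (aggregated - xi) / (tau + tau_min)
        xi += dt * dxi
        y.append(phi(xi))
    return np.array(y)

# illustrative single-input unit driven by a unit step delayed through T_1 (assumed parameters)
y = simulate_tmd1_dnu(lambda t: np.array([1.0, 1.0 if t >= 1.0 else 0.0]),
                      w=[0.2, 1.0], T=[0.0, 2.0], tau=1.5)
```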
44
© Ivo Bukovsky
CTU in Prague, FME
Tf ≥ 0, Ti ≥ 0 for i = 0…n, τ ≥ 0, τmin > 0
(Figure: Type 2 Time-Delay Dynamic Neural Unit (TmD2-DNU). Each neural input ui(t) is delayed by an adaptable delay Ti and weighted by wi (u0 … a constant neural bias); the weighted sum ν(t), minus the state fed back through the adaptable delay Tf, i.e. ξ(t−Tf), drives the block 1/((τ + τmin)·s); the output function φ(ξ(t)) gives y(t).)
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) (cont.)
Type 2 Time Delay Dynamic Neural Unit
s…the Laplace operator
45
© Ivo Bukovsky
CTU in Prague, FME
Tf ≥ 0, Ti≥ 0 for i=0…n , τ ≥0 , τmin >0
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) (cont.)
Type 2 Time Delay Dynamic Neural Unit
Adaptable: wi, Ti, Tf, τ

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t-T_f)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$

$$\Xi(s)=\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+e^{-T_f s}}\,U_i(s)=\sum_{i=0}^{n}G_i(s)\,U_i(s),\qquad\text{where }\ \Xi(s)=L\{\xi(t)\}$$
∑ ∑
s…the Laplace operator L{ }…the Laplace transform
46
© Ivo Bukovsky
CTU in Prague, FME
Dynamic-Order-Extended TmD2-DNU
T0, …, Tj, …, Tn ≥ 0; Tf ≥ 0; τ1 ≥ 0, τ2 ≥ 0, τmin > 0
(Figure: each neural input uj(t) is delayed by Tj and weighted by wj (u0 … a constant neural bias); the weighted sum ν(t) passes through the dynamic block G(s), and the output function φ(ξ(t)) gives y(t).)

$$G(s)=\frac{1}{\big((\tau_1+\tau_{min})\,s+e^{-T_f s}\big)\,\big((\tau_2+\tau_{min})\,s+1\big)}$$

2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU) (cont.)
s … the Laplace operator
47
© Ivo Bukovsky
CTU in Prague, FME
2. Development of New Neural Architectures
- Time-Delay Dynamic Neural Units (TmD-DNU)
1. Introduction- Conventional Neural Units (Neurons) and Networks
- Motivation for the development of Nonconventional Neural Units
2. Development of New Neural Architectures (Units)
- Higher-Order Nonlinear Neural Units (QNU CNU HONNU )
-- Static HONNU
-- Dynamic HONNU
- Time-Delay Dynamic Neural Units (TmD-DNU)
-Time-Delay Dynamic Higher-Order Nonlinear Neural Units
(TmD-HONNU)
⊂ ⊆
48
© Ivo Bukovsky
CTU in Prague, FME
Tf ≥ 0, Ti≥ 0 for i=0…n , τ ≥0 , τmin >0
(Figure: Type 1 Time-Delay Dynamic Quadratic Neural Unit (TmD1-DQNU). Each neural input uj(t) is delayed by an adaptable delay Tj (u0 … a constant neural bias); the delayed inputs and the fed-back state ξ(t) form the augmented vector xa, aggregated quadratically as Σ_{i=0}^{n+1} Σ_{j=i}^{n+1} w_ij x_i x_j; the result is integrated by 1/s to give ξ(t), and the output function φ(ξ(t)) gives y(t).)
2. Development of New Neural Architectures
- Time-Delay Dynamic HONNU (TmD-HONNU) (cont.)
Type 1 Time-Delay Dynamic Quadratic Neural Unit
s … the Laplace operator
49
© Ivo Bukovsky
CTU in Prague, FME
Tf ≥ 0, Ti≥ 0 for i=0…n
(Figure: Type 2 Time-Delay Dynamic Quadratic Neural Unit (TmD2-DQNU). Each neural input uj(t) is delayed by Tj (u0 … a constant neural bias); the delayed inputs and the state fed back through the adaptable delay Tf, i.e. ξ(t−Tf), form the augmented vector xa, aggregated quadratically as ν(t) = Σ_{i=0}^{n+1} Σ_{j=i}^{n+1} w_ij x_i x_j; integration by 1/s gives ξ(t), and the output function φ(ξ(t)) gives y(t).)
2. Development of New Neural Architectures
- Time-Delay Dynamic HONNU (TmD-HONNU) (cont.)
Type 2 Time-Delay Dynamic Quadratic Neural Unit
s … the Laplace operator
50
© Ivo Bukovsky
CTU in Prague, FME
Ti ≥ 0 for i = 0…n and Tfi ≥ 0 for i = 1…m, τ ≥ 0, τmin > 0, wij = 0 for i > n+1 and j ≥ 1
TmD3-DQNU (hypothetically going for higher types)
(Figure: Type 3 Time-Delay Dynamic Quadratic Neural Unit. Each neural input uj(t) is delayed by Tj (u0 … a constant neural bias); the state ξ(t) is fed back through several adaptable delays Tf1, …, Tfj, …, Tfm, giving ξ(t−Tf1), …, ξ(t−Tfm); the delayed inputs and delayed states form the augmented vector xa, aggregated quadratically as Σ_{i=0}^{n+m} Σ_{j=i}^{n+m} w_ij x_i x_j; integration by 1/s gives ξ(t), and the output function φ(ξ(t)) gives y(t).)
s … the Laplace operator
2. Development of New Neural Architectures
- Time-Delay Dynamic HONNU (TmD-HONNU) (cont.)
51
© Ivo Bukovsky
CTU in Prague, FME
Ti ≥ 0 for i = 0…n and Tfi ≥ 0 for i = 1…m, τ ≥ 0, τmin > 0, wij = 0 for i > n+1 and j ≥ 1
TmD3-DCNU (hypothetically going for higher types)
(Figure: Type 3 Time-Delay Dynamic Cubic Neural Unit, analogous to the TmD3-DQNU but with the cubic aggregation Σ_{i=0}^{n+m} Σ_{j=i}^{n+m} Σ_{k=j}^{n+m} w_ijk x_i x_j x_k of the delayed inputs uj(t−Tj), the bias u0, and the delayed states ξ(t−Tf1), …, ξ(t−Tfm).)
2. Development of New Neural Architectures
- Time-Delay Dynamic HONNU (TmD-HONNU) (cont.)
s … the Laplace operator
52
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of HONNU and TmD-DNU
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
53
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules
- Adaptation of HONNU and TmD-DNU
(Figure: adaptation scheme. The UNKNOWN SYSTEM and the TmD-DNU are driven by the same input u(t); the difference between their outputs y(t) and yr(t) is the error ε(t), which drives the weight adaptation with the use of the performance index J.)

$$J=\frac{1}{2}\,\varepsilon^{2},\qquad \Delta w_i=-\mu\,\frac{\partial J}{\partial w_i}=-\mu\,\frac{\partial}{\partial w_i}\Big(\frac{1}{2}\,\varepsilon^{2}\Big)$$

Adapted increments: Δwi, ΔTi, ΔTf, Δτ, …
Neural parameters of both HONNU and TmD-DNU can be adapted by the latest error ε(t) against the gradient of the performance index J.
54
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules
- Adaptation of HONNU and TmD-DNU (cont.)
Performance index: $J=\dfrac{1}{2}\,e(t)^{2}$
Error: $e(t)=y_r(t)-y(t)$
Neural parameter increment against the gradient of J (μ … learning rate):
$$\Delta w_i=-\mu\,\frac{\partial J}{\partial w_i}=\mu\,e(t)\,\frac{\partial y(t)}{\partial w_i}$$
Neural parameter in the next computation step: $w_i^{(k+1)}=w_i^{(k)}+\Delta w_i^{(k)}$

!!! The same approach applies to all static and dynamic HONNU as well as to the adaptable delays of TmD-DNU !!!
Such a convenient technique does not work that simply for TmD-HONNU, because of stability issues of nonlinear delay differential equations.
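A minimal sketch of this incremental gradient-descent rule (my own illustration; the linear output function and learning rate are assumptions). For a static QNU the gradient ∂y/∂w_ij is simply x_i·x_j, so one sample-by-sample update is cheap:

```python
import numpy as np

def adapt_step(w, x_a, y_target, mu=0.01):
    """One sample-by-sample update of QNU weights w, stored per index pair (i, j) with i <= j."""
    pairs = [(i, j) for i in range(len(x_a)) for j in range(i, len(x_a))]
    y = sum(w[p] * x_a[i] * x_a[j] for p, (i, j) in enumerate(pairs))   # linear output function assumed
    e = y_target - y                                                    # e(t) = y_r(t) - y(t)
    for p, (i, j) in enumerate(pairs):
        w[p] += mu * e * x_a[i] * x_a[j]                                # w <- w - mu * dJ/dw,  J = e^2 / 2
    return w, e
```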
55
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Static HONNU
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
56
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the Static HONNU (Higher-Order Nonlinear Neural Unit) from Part 2, repeated for reference: the augmented input vector xa (u0 … a constant neural bias) enters the nonlinear aggregation ν = f_HONNU(xa, Wa), and the output function φ(ν) gives y; f_HONNU ~ f_QNU, f_CNU, …; Wa … an augmented matrix of neural weights.)
3. Learning Rules- Adaptation of Static HONNU
57
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Static HONNU
The case of the Quadratic Neural Unit:

$$\Delta w_{ij}=-\mu\,\frac{\partial J}{\partial w_{ij}}=-\mu\,\frac{\partial\big(\tfrac{1}{2}\,\varepsilon(k+1)^{2}\big)}{\partial w_{ij}}=-\mu\,\varepsilon(k+1)\,\frac{\partial\big(y_r(k+1)-y(k+1)\big)}{\partial w_{ij}}=\mu\,\varepsilon(k+1)\,\frac{\partial\varphi\big(f_{HONNU}(\mathbf{x}_a,\mathbf{W}_a)\big)}{\partial w_{ij}}$$
$$=\mu\,\varepsilon(k+1)\,\frac{\partial\varphi(\xi)}{\partial\xi}\,\frac{\partial}{\partial w_{ij}}\sum_{i=0}^{n}\sum_{j=i}^{n}w_{ij}\,x_i\,x_j=\mu\,\varepsilon(k+1)\,\frac{\partial\varphi(\xi)}{\partial\xi}\,x_i\,x_j\,,$$

where $\mathbf{x}_a=[x_0\ x_1\ \dots\ x_i\ \dots\ x_n]^{T}=[1\ u_1\ \dots\ u_j\ \dots\ u_n]^{T}$ and $x_0=1$.
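A compact, hedged training-loop sketch for the static QNU rule above (the data and learning rate are assumptions): with a linear output function, the update Δw_ij = μ·ε·x_i·x_j is just an outer product over the augmented vector.

```python
import numpy as np

def train_static_qnu(U, y_r, mu=0.05, epochs=50):
    """Sample-by-sample adaptation of an upper-triangular QNU weight matrix W_a."""
    n = U.shape[1]
    W_a = np.zeros((n + 1, n + 1))
    for _ in range(epochs):
        for u, target in zip(U, y_r):
            x_a = np.concatenate(([1.0], u))                 # x_0 = 1 ... bias
            y = x_a @ W_a @ x_a                              # nu = sum_{i<=j} w_ij x_i x_j
            eps = target - y                                 # eps(k+1) = y_r(k+1) - y(k+1)
            W_a += mu * eps * np.triu(np.outer(x_a, x_a))    # dnu/dw_ij = x_i x_j, only i <= j updated
    return W_a

# illustrative data: the QNU learns an (assumed) quadratic target
rng = np.random.default_rng(3)
U = rng.uniform(-1, 1, (100, 2))
y_r = 0.5 + U[:, 0] - 2.0 * U[:, 0] * U[:, 1]
W_a = train_static_qnu(U, y_r)
```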
58
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Dynamic HONNU
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
59
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of DISCRETE Dynamic HONNU
$$\Delta w_{ij}=-\mu\,\frac{\partial\big(\tfrac{1}{2}\,\varepsilon(k+1)^{2}\big)}{\partial w_{ij}}=\mu\,\varepsilon\,\frac{\partial\varphi(\xi)}{\partial\xi}\cdot\frac{\partial}{\partial w_{ij}}\,D^{m}\big\{f_{HONNU}(\mathbf{x}_a,\mathbf{W}_a)\big\}=\mu\,\varepsilon\,\frac{\partial\varphi(\xi)}{\partial\xi}\cdot D^{m}\big\{x_i\,x_j\big\}\qquad\text{(for the case of QNU)}$$

(Figure: the Discrete Dynamic-Order-Extended QNU, repeated: neural inputs u, the bias u0 (a constant neural bias), and the delayed states ξ(k−1), …, ξ(k−m) form xa; the nonlinear synaptic and somatic aggregation Σ_{i=0}^{n+m} Σ_{j=i}^{n+m} w_ij x_i x_j gives ν = ξ(k), and the output function φ(ξ) gives y.)
z … step delay
D … delay
operator
60
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of CONTINUOUS HONNU (DOE HONNU)
(Figure: the Continuous Dynamic-Order-Extended Cubic Neural Unit (DOE-CNU), repeated: the cubic aggregation Σ_{i=0}^{n+m} Σ_{j=i}^{n+m} Σ_{k=j}^{n+m} w_ijk x_i x_j x_k of the inputs u(t), the bias u0, and the states ξ1(t) … ξm(t) drives a chain of m integrators 1/s; the output function φ(ξ1) gives y; s … the Laplace operator.)

$$y(t)=\varphi\big(\xi_1(t)\big)=\varphi\Big(\int_0^{t}\!\!\cdots\!\int_0^{t}f_{HONNU}\big(\mathbf{x}_a(\tau),\mathbf{u}(\tau),\mathbf{W}_a\big)\,d\tau\cdots d\tau\Big),$$
$$\xi_m(t)=\int_0^{t}f_{HONNU}\,d\tau,\ \ \dots,\ \ \xi_1(t)=\int_0^{t}\xi_2(\tau)\,d\tau$$
61
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules
- Adaptation of CONTINUOUS HONNU (DOE HONNU) (cont.)
$$\Delta w_{ij}=-\mu\,\frac{\partial\big(\tfrac{1}{2}\,\varepsilon(t)^{2}\big)}{\partial w_{ij}}=\mu\,\varepsilon(t)\,\frac{\partial y(t)}{\partial w_{ij}}=\mu\,\varepsilon(t)\,\frac{\partial\varphi}{\partial\xi_1}\cdot\frac{\partial}{\partial w_{ij}}\int_0^{t}\!\!\cdots\!\int_0^{t}f_{HONNU}\big(\mathbf{x}_a,\mathbf{W}_a\big)\,d\tau\cdots d\tau$$
$$=\mu\,\varepsilon(t)\,\frac{\partial\varphi}{\partial\xi_1}\int_0^{t}\!\!\cdots\!\int_0^{t}x_i(\tau)\,x_j(\tau)\,d\tau\cdots d\tau\qquad\text{(for the case of QNU)}$$
62
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
63
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules - Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 1 Time Delay Dynamic Neural Unit
Tf ≥ 0, Ti ≥ 0 for i = 0…n, τ ≥ 0, τmin > 0
(Figure: the TmD1-DNU, repeated: the weighted, delayed inputs wi·ui(t−Ti) (u0 … a constant neural bias, weight w0), minus the fed-back state ξ(t), drive the block 1/((τ + τmin)·s); the output function φ(ξ(t)) gives y(t).)

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$

$$\Xi(s)=\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)=\sum_{i=0}^{n}G_i(s)\,U_i(s),\qquad\text{where }\ \Xi(s)=L\{\xi(t)\}$$

s … the Laplace operator, L{ } … the Laplace transform
64
© Ivo Bukovsky
CTU in Prague, FME
$$\Delta w_i=-\mu\,\frac{\partial J}{\partial w_i}=\mu\,e(t)\,\frac{\partial y(t)}{\partial w_i}\qquad(\mu\ \dots\ \text{learning rate})$$

$$\frac{\partial y(t)}{\partial w_i}=\frac{\partial\varphi\big(\xi(t)\big)}{\partial\xi(t)}\cdot\frac{\partial\xi(t)}{\partial w_i},\qquad \frac{\partial\xi(t)}{\partial w_i}=L^{-1}\Big\{\frac{e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)\Big\}$$

$$\Xi(s)=\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)=\sum_{i=0}^{n}G_i(s)\,U_i(s),\qquad\text{where }\ \Xi(s)=L\{\xi(t)\}$$

s … the Laplace operator, L{ } … the Laplace transform
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 1 Time Delay Dynamic Neural Unit
65
© Ivo Bukovsky
CTU in Prague, FME
s…the Laplace operator, L{ }…the Laplace transform
Adaptation of the input time delays Ti:

$$\Delta T_i(k+1)=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial\xi(t)}{\partial T_i},\qquad \frac{\partial\xi(t)}{\partial T_i}=L^{-1}\Big\{\frac{-\,s\,w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)\Big\}$$

3. Learning Rules - Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 1 Time Delay Dynamic Neural Unit

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$
66
© Ivo Bukovsky
CTU in Prague, FME
Single-input unit without bias
(Figure: adaptation scheme for a single-input TmD-DNU1 without bias. The UNKNOWN SYSTEM and the neural unit share the input u1(t); the unit computes w1·u1(t−T1) filtered by 1/((τ + τmin)·s); the error e(t) between the outputs y(t) and yr(t), scaled by the learning rate μ, produces the delay increment ΔT1.)
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
67
© Ivo Bukovsky
CTU in Prague, FME
s…the Laplace operator, L{ }…the Laplace transform
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 1 Time Delay Dynamic Neural Unit

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$

Adaptation of the time constant τ:
$$\Delta\tau(k+1)=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial}{\partial\tau}\,L^{-1}\Big\{\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)\Big\}=-\,\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,L^{-1}\Big\{\frac{s\,L\{\xi(t)\}}{(\tau+\tau_{min})\,s+1}\Big\}$$
68
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 1 Time Delay Dynamic Neural Unit

$$\Delta\tau(k+1)=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial}{\partial\tau}\,L^{-1}\Big\{\sum_{i=0}^{n}\frac{w_i\,e^{-T_i s}}{(\tau+\tau_{min})\,s+1}\,U_i(s)\Big\}$$
$$=-\,\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,L^{-1}\Big\{\frac{s}{\big((\tau+\tau_{min})\,s+1\big)^{2}}\sum_{i=0}^{n}w_i\,e^{-T_i s}\,U_i(s)\Big\}$$
$$=-\,\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,L^{-1}\Big\{\frac{s\,L\{\xi(t)\}}{(\tau+\tau_{min})\,s+1}\Big\}$$
69
© Ivo Bukovsky
CTU in Prague, FME
s…the Laplace operator, L{ }…the Laplace transform
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
Type 2 Time Delay Dynamic Neural Unit

$$(\tau+\tau_{min})\,\frac{d\xi(t)}{dt}+\xi(t-T_f)=\sum_{i=0}^{n}w_i\,u_i(t-T_i),\qquad y(t)=\varphi\big(\xi(t)\big)$$

Adaptation of the state-feedback delay Tf:
$$\Delta T_f=\mu\,\varepsilon(t)\,\frac{\partial y(t)}{\partial T_f}=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial\xi(t)}{\partial T_f}=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\;L^{-1}\Big\{\frac{s\,e^{-T_f s}}{(\tau+\tau_{min})\,s+e^{-T_f s}}\,Y(s)\Big\}$$
70
© Ivo Bukovsky
CTU in Prague, FME
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
$$\Delta T_f=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial\xi(t)}{\partial T_f},\qquad \frac{\partial\xi(t)}{\partial T_f}=\frac{\partial}{\partial T_f}\,L^{-1}\big\{G_{TmD2\text{-}DNU}(s)\,U(s)\big\}=L^{-1}\Big\{\frac{s\,e^{-T_f s}}{\big((\tau+\tau_{min})\,s+e^{-T_f s}\big)^{2}}\sum_{i=0}^{n}w_i\,e^{-T_i s}\,U_i(s)\Big\}$$

$$\Delta T_f=\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\;L^{-1}\Big\{\frac{s\,e^{-T_f s}}{(\tau+\tau_{min})\,s+e^{-T_f s}}\,Y(s)\Big\}$$
s…the Laplace operator
L{ }…the Laplace transform
71
© Ivo Bukovsky
CTU in Prague, FME
Single-input unit without bias
(Figure: adaptation scheme for the state-feedback delay Tf of a single-input TmD-DNU. The UNKNOWN SYSTEM and the unit share the input u1(t); the unit contains the block 1/((τ + τmin)·s) with the state fed back through the delay Tf; the error e(t) between the outputs y(t) and yr(t), together with the delayed output y(t−Tf) and the learning rate μ, forms the increment ΔTf.)
3. Learning Rules- Adaptation of Time-Delay Dynamic Neural Units (TmD-DNU)
72
© Ivo Bukovsky
CTU in Prague, FME
4. New Universal Classification of Neural Units
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU(not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
73
© Ivo Bukovsky
CTU in Prague, FME
Tf ≥ 0, Ti ≥ 0 for i = 0…n
(Figure: the Type 2 Time-Delay Dynamic Quadratic Neural Unit (TmD2-DQNU), repeated: the delayed inputs uj(t−Tj), the bias u0, and the state fed back through the delay Tf, ξ(t−Tf), form xa; the quadratic aggregation ν(t) = Σ_{i=0}^{n+1} Σ_{j=i}^{n+1} w_ij x_i x_j is integrated by 1/s to give ξ(t), and φ(ξ(t)) gives y(t).)
Learning Rules- Adaptation of Time-Delay HONNU (TmD2-QNU)
Type 2 Time-Delay Dynamic Quadratic Neural Unit
s…the
Laplace
operator
74
© Ivo Bukovsky
CTU in Prague, FME
Partial development of the learning rule for the TmD2-QNU: the adaptable time delay Tf in the state feedback (shown for a single-input unit with xa = [x0, u1, ξ(t−Tf)]):

$$\Delta T_f(k+1)\approx\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\,\frac{\partial}{\partial T_f}\int_0^{t}f_{QNU}\big(\mathbf{x}_a(\tau),\mathbf{W}_a\big)\,d\tau$$
$$\approx-\,\mu\,\varepsilon(t)\,\frac{\partial\varphi(\xi(t))}{\partial\xi(t)}\int_0^{t}\Big(w_{02}\,\dot\xi(\tau-T_f)+w_{12}\,u_1(\tau)\,\dot\xi(\tau-T_f)+2\,w_{22}\,\xi(\tau-T_f)\,\dot\xi(\tau-T_f)\Big)\,d\tau,$$
where $\dot\xi(t-T_f)=\nu(t-T_f)$.

3. Learning Rules - Adaptation of Time-Delay HONNU (TmD2-QNU)
75
© Ivo Bukovsky
CTU in Prague, FME
4. New Universal Classification of Neural Units
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
76
© Ivo Bukovsky
CTU in Prague, FME
The new classification is based on three attributes of the neural structure:
1. the nonlinearity of the aggregating operation ν,
2. the dynamic order of a unit (i.e., the number of time integrations of the aggregated variable ν, or step delays), and
3. the way the adaptable time delays are implemented (inputs, feedback), i.e., the special synaptic pre-processing.
(Figure: a general neural unit. Neural inputs u0 … un pass through the special synaptic pre-processing (adaptable time delays, …); the nonlinear aggregation ν = f_HONNU(xa, Wa), the dynamic processor, and the neural state feedback ξ produce ξ(t); the conventional nonlinear output mapping φ(ξ(t)) gives the neural output y.)
4. New Universal Classification of Neural Units (cont.)
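A small sketch of my own (not part of the tutorial) showing how the three classification attributes could be encoded and turned into the unit labels used on the following slides:

```python
from dataclasses import dataclass

@dataclass
class NeuralUnit:
    nonlinearity: int      # polynomial order of the aggregating operation nu: 1=linear, 2=QNU, 3=CNU
    dynamic_order: int     # number of integrations / step delays of nu: 0=static, 1=dynamic, 2+=DOE
    delay_type: int        # adaptable time-delay implementation: 0=none, 1=inputs, 2=+state feedback, 3=extended

    def label(self) -> str:
        core = {1: "LNU", 2: "QNU", 3: "CNU"}[self.nonlinearity]
        if self.delay_type:
            return f"TmD{self.delay_type}-{core}"
        if self.dynamic_order == 0:
            return f"static {core}"
        return ("DOE " if self.dynamic_order > 1 else "dynamic ") + core

print(NeuralUnit(2, 1, 0).label())   # dynamic QNU
print(NeuralUnit(2, 1, 2).label())   # TmD2-QNU
```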
77
© Ivo Bukovsky
CTU in Prague, FME
Mathematics and Technology vs. Biology
(Figure: the state-space representation of a dynamic system, dx(t)/dt = f(x(t), u(t)), y(t) = g(x(t)), with inputs u(t) = [u1(t) … un(t)], drawn next to the morphology of a biological neuron: synapse of neural inputs, dendrites, soma, nucleus, and the axon carrying the neural output.)
Indeed, the state-space representation seems to be a good concept for approaching an understanding of the higher computational capabilities of real biological neurons.
4. New Universal Classification of Neural Units
78
© Ivo Bukovsky
CTU in Prague, FME
4. New Universal Classification of Neural Units (cont.)
(Figure: nonconventional dynamic neural units classified by the aggregating operation (linear vs. nonlinear) and by their dynamics: static and dynamic HONNU, linear Time-Delay Dynamic Neural Units (TmD-DNU), linear Dynamic-Order-Extended Time-Delay Dynamic Neural Units, and nonlinear Time-Delay HONNU.)
79
© Ivo Bukovsky
CTU in Prague, FME
4. New Universal Classification of Neural Units (cont.)
(Figure: classification plane spanned by the nonlinearity of the aggregating operation ν (columns ν = Σ wi xi, ν = Σ Σ wij xi xj, ν = Σ Σ Σ wijk xi xj xk) and the dynamic order, i.e. the number of integrations of the aggregated variable ν (rows 0, 1, 2+): conventional static and dynamic linear neural units (linear aggregating function), static QNU and CNU, dynamic QNU and CNU, dynamic-order-extended linear neural units, DOE QNU, and DOE CNU. A third axis, the type of time-delay implementation ('1', '2', '3'), extends the plane into a cube.)
80
© Ivo Bukovsky
CTU in Prague, FME
(Figure: classification plane for time-delay units, spanned by the nonlinearity of the aggregating operation ν and the type of time-delay implementation ('1', '2', '3'): linear Type-1 and Type-2 TmD-DNU (linear aggregating function), Type-1 TmD-DQNU (TmD1-QNU), Type-1 TmD-DCNU (TmD1-CNU), Type-2 TmD-DQNU (TmD2-QNU), Type-2 TmD-DCNU (TmD2-CNU), extended TmD-DNU (linear aggregating function), extended TmD-DQNU (TmD3-QNU), and extended TmD-DCNU (TmD3-CNU).)
4. New Universal Classification of Neural Units (cont.)
81
© Ivo Bukovsky
CTU in Prague, FME
4. New Universal Classification of Neural Units (cont.)
(Figure: the classification cube with the axes 'nonlinearity of the aggregating operation ν', 'number of integrations of the neural aggregated variable ν', and 'type of time-delay implementation'; conventional neural units with a linear aggregating operation ν (static LNU, dynamic LNU), Higher-Order Nonlinear Neural Units (HONNU), and Time-Delay Dynamic Neural Units (TmD-DNU) occupy different regions of the cube.)
The classification of artificial neural units is based on the three attributes of the neural structure
82
© Ivo Bukovsky
CTU in Prague, FME
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
83
© Ivo Bukovsky
CTU in Prague, FME
(Figure: a single-dendrite neuron, with the synapse of neural inputs, the soma, and the axon: the concept behind conventional neural units, i.e., units with a linear synaptic operation.)
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
84
© Ivo Bukovsky
CTU in Prague, FME
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
(Gupta et al., 2003)
(Figure: the two sketches of the Static HONNU (Higher-Order Nonlinear Neural Unit): a nonlinear synaptic preprocessor (nonlinear synaptic neural operation) followed by a static somatic neural operation φ(ν), versus the nonlinear aggregation ν = f_HONNU(xa, Wa) followed by the output function; xa … an augmented input vector, Wa … an augmented matrix of neural weights, u0 … a constant neural bias.)
Sketch closer to the concept of inter-correlations (biologically less likely, see further slides).
Sketch closer to the concept of function approximation (biologically more likely, see further slides).
85
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the Static Higher-Order Nonlinear Neural Unit drawn again in its two equivalent forms: nonlinear aggregation of neural inputs (nonlinear synaptic neural operation) ν = f_HONNU(xa, Wa), static somatic neural operation φ(ν), neural output y; u0 … a constant neural bias.)
These two architectures are analogous in their mathematical input-output mapping function; however, their parallels to the biological morphology of a neuron are not the same (especially for dynamic HONNU).
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
86
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the Static Higher-Order Nonlinear Neural Unit in its two equivalent forms, as above.)
In the representation below, not all neural inputs have to interact directly at the synaptic junctions; correlations may also arise in the soma.
In the representation above, all neural inputs are thought to be correlated at the synaptic junctions in the case of a full polynomial function.
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
87
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the single-dendrite neuron, with the synapse of neural inputs, the soma, and the axon: the concept of conventional neural units with a linear synaptic operation, repeated.)
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
88
© Ivo Bukovsky
CTU in Prague, FME
(Figure: a biological neuron with the synapse of neural inputs and the soma; inputs u1, u2, …, uh, uh+1, uh+2 aggregate into ν, and φ(ν) is the output; a synaptic junction with two dendrites is also indicated.)
The r-th (here a three-dendrite) synaptic junction:
$$\nu_{synaptic}^{\,r}=\sum_{i=h}^{h+2}w_i^{(h)}u_i+\sum_{i=h}^{h+2}\sum_{j=i}^{h+2}w_{ij}^{(h)}u_iu_j+\sum_{i=h}^{h+2}\sum_{j=i}^{h+2}\sum_{k=j}^{h+2}w_{ijk}^{(h)}u_iu_ju_k$$
$$\nu=\nu_{synaptic}+\nu_{somatic}=\sum_{i=0}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}w_{ijk}\,u_iu_ju_k$$
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
89
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the Static Higher-Order Nonlinear Neural Unit: the nonlinear aggregation of neural inputs ν = f_HONNU(xa, Wa), the static somatic neural operation φ(ν), and the neural output y; u0 … a constant neural bias.)
$$\nu=f_{HONNU}=\nu_{synaptic}+\nu_{somatic}=\dots=\sum_{i=0}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}w_{ijk}\,u_iu_ju_k$$
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
90
© Ivo Bukovsky
CTU in Prague, FME
Generalized aggregating function of HONNU:
$$\nu=\nu_{synaptic}+\nu_{somatic}=\dots=\sum_{i=0}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n}w_{ijk}\,u_iu_ju_k$$
(Figure: the Static Higher-Order Nonlinear Neural Unit with the nonlinear aggregation of neural inputs and the conventional neural output function φ(ν), drawn next to the biological neuron with its synapse of neural inputs, dendrites, soma, nucleus, and axon; xa … an augmented input vector, Wa … an augmented matrix of neural weights, u0 … a constant neural bias.)
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units (cont.)
91
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the same correspondence between the Static Higher-Order Nonlinear Neural Unit and the biological neuron, repeated.)
5. A Note on Biological Aspects of Mathematical
Structure of New Neural Units (cont.)
92
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications of New Neural Architectures
3. Learning Rules
- Adaptation of static HONNU
- Adaptation of dynamic HONNU
- Adaptation of TmD-DNU
- Adaptation of TmD-HONNU (not yet a simple generally stable technique)
4. New Universal Classification of Neural Units
5. A Note on Biological Aspects of Mathematical Structure of New Neural Units
6. Examples of Applications of New Neural Architectures
93
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Static modeling measured turbine loop variables
• 18 variables were measured on a turbine loop system.
•Training data length = 2000 samples for each variable.
•Each variable was modeled by training static neural networks or static QNU with other variables as neural (model) inputs.
94
© Ivo Bukovsky
CTU in Prague, FME
(Figure: a static Quadratic Neural Unit used as a model of the measured variable P06. The augmented input vector xa is built from the other measured variables (u0 … a constant neural bias); the quadratic aggregation Σ w_ij x_i x_j and the somatic neural output function φ(ν) give the model output y_QNU.)
xa ... an augmented input vector into a nonlinear synaptic operation
6. Examples of Applications- Static modeling measured turbine loop variables (cont.)
T … temperature, P … pressure, F … mass flow
x=u=[T40
T4_1
T4_2
P3
P4
P5
P7
P1_1
P1_2
T1_1
T1_2
T07
T09
P08_1
P08_2
AF01
Power_MW]
y_QNU = P06
95
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Static modeling measured turbine loop variables (cont.)
• double-hidden-layer FFNN
• single-hidden-layer FFNN
• static QNU
• measured data
96
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications
- Static modeling measured turbine loop variables (cont.)
• double-hidden-layer FFNN
• single-hidden-layer FFNN
• static QNU
• measured data
97
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Static modeling measured turbine loop variables (cont.)
Results – the performance of the various neural architectures:
• Ten two-hidden-layer NN (from 3-2-1 up to 7-3-1 networks, from 65 up to 154 neural weights) => longest training, largest error
• Ten single-hidden-layer NN (from 3-1 up to 7-1 networks, from 58 up to 134 neural weights) … shorter training, smaller error
• Ten single static QNUs (153 neural parameters) … fastest, smallest error, smallest error variance of neural outputs
98
© Ivo Bukovsky
CTU in Prague, FME
The Plant: $a\,\ddot{x}(t)+2\,\dot{x}(t)+1.5\,x(t)+x(t)^{3}=r(t),\qquad a=1.0$
6. Examples of Applications- Static QNU as an Adaptable State Feedback Controller
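A hedged simulation sketch of such a plant under a quadratic state-feedback law (the plant coefficients and the controller structure here are illustrative assumptions, not the tutorial's exact setup):

```python
import numpy as np

def plant_rhs(x, dx, r, a=1.0):
    """ddx from  a*ddx + 2*dx + 1.5*x + x**3 = r  (coefficients assumed for this sketch)."""
    return (r - 2.0 * dx - 1.5 * x - x ** 3) / a

def qnu_feedback(state, W):
    """Quadratic state feedback: sum_{i<=j} w_ij s_i s_j over the augmented state s = [1, x, dx]."""
    s = np.array([1.0, *state])
    return float(s @ np.triu(W) @ s)

# closed-loop simulation with fixed (untuned) controller weights, forward Euler
W = 0.1 * np.ones((3, 3))
x, dx, dt = 1.0, 0.0, 0.001
for _ in range(int(10.0 / dt)):
    r = -qnu_feedback((x, dx), W)          # control action from the QNU controller
    ddx = plant_rhs(x, dx, r)
    dx += dt * ddx
    x += dt * dx
```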
99
© Ivo Bukovsky
CTU in Prague, FME
The Plant: $a\,\ddot{x}(t)+2\,\dot{x}(t)+1.5\,x(t)+x(t)^{3}=r(t)$
(Figure: the Quadratic Neural Unit as an Adaptable State Feedback Controller with Variable Damping.)
6. Examples of Applications- Static QNU as an Adaptable State Feedback Controller (cont.)
100
© Ivo Bukovsky
CTU in Prague, FME 22
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics
(Figure: each robotic leg identified by a single continuous dynamic Quadratic Neural Unit: the quadratic aggregation ν = f(x, W) of the inputs u1 … un, the bias x0 = 1, and the neural states x1, x2 is integrated twice (∫(.)dt), and the unit approximates the second derivative, y″ ≈ f(x, W); φ(ν) is the output function.)
Each robotic leg was identified by a single Continuous Dynamic Quadratic Neural Unit.
Parallel manipulator TRIPOD
(Valasek et al., VVZ J04/98 212200008, …, CTU in Prague )
101
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics (cont.)
(Figure: the continuous dynamic QNU for one robotic leg: the neural inputs (position and velocity) and the bias x0 = 1 enter the nonlinear synaptic and somatic neural aggregation f(x, W) with neural dynamics formed by two integrators; the output, considered linear, approximates the leg acceleration, y″ ≈ f(x, W).)
The identified model combines a linear part of the neural dynamics (weights w_a acting on the control variable u1, the piston positions y1, y2, y3, and their velocities) with an a-priori designed nonlinear part of the neural dynamics built from Gaussian-type terms w_ae·e^(−70·(y_i − y_j)²) of the position differences.
102
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics (cont.)
where
w_a, w_ae … neural weights
u1 … control variable of leg a
y1, y2, y3 … actual piston lengths (positions)
(The approximated model again has the form y″ ≈ f(x, W), with the linear part of the neural dynamics in w_a and the a-priori designed nonlinear part in w_ae, as on the previous slide.)
Continuous Dynamic QNU
103
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics (cont.)
Actual piston lengths (positions) y1, y2, y3
Approximated piston lengths by the DQNU
Error of the dynamic approximation (DQNU)
Simulation for similar control variables u1, u2, u3
104
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics (cont.)
Simulation for a less complicated control variable
Actual piston lengths (positions) y1, y2, y3; error of the dynamic approximation (DQNU)
Approximated length of piston 1
105
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications- Testing convergence of dynamic QNU during identification of
Tripod Dynamics (cont.)
• The linear neural component prevailed during adaptation
• The nonlinearity of DQNU did not introduce instability problems during adaptation
• The nonlinearity improved the neural model accuracy only slightly compared to a purely linear adaptive model.
• Good and stable convergence of continuous DQNU has been observed
106
© Ivo Bukovsky
CTU in Prague, FME
• TRIPOD Manipulator
– Goal 1: special dynamic HONNU were tested for stability and convergence
– In other words, whether units with such a nonlinearity can even work…
– Goal 2: approximation accuracy improvement
• Results
– Stability excellent
– Convergence fast
– Approximation accuracy improvement of a few percent (a little better than the purely linear model)
– The designed nonlinearity does not correspond to the physical principles and is not best suited to this system
6. Examples of Applications- Testing convergence of dynamic QNU for Tripod Dynamics
(results)
107
© Ivo Bukovsky
CTU in Prague, FME
6. Examples of Applications
-Continuous Time-Delay Dynamic Neural Units
Approximated by a single-input TmD-DNU (unbiased, u0 = 0):
$$(\tau+\tau_{min})\,\frac{dx(t)}{dt}+x(t-T_f)=w_1\,u_1(t-T_1)$$
The approximated plant: $G(s)=\dfrac{1}{(2s+1)^{10}}$
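A hedged numerical sketch (my own, with assumed parameter values) comparing the step response of the 10th-order plant G(s) = 1/(2s+1)^10 with a single-input delay model of the form above whose constants stand in for adapted values:

```python
import numpy as np

dt, t_end = 0.01, 60.0
t = np.arange(0.0, t_end, dt)
u = np.ones_like(t)                       # unit step input

# plant: ten cascaded first-order lags with time constant 2 -> G(s) = 1/(2s+1)^10
x_plant = np.zeros(10)
y_plant = []
for uk in u:
    inp = uk
    for i in range(10):
        x_plant[i] += dt * (inp - x_plant[i]) / 2.0
        inp = x_plant[i]
    y_plant.append(x_plant[-1])

# single-input delay model: (tau+tau_min)*dx/dt + x(t-Tf) = w1*u(t-T1)   (parameters assumed)
tau, tau_min, Tf, T1, w1 = 4.0, 0.1, 8.0, 2.0, 1.0
x_hist, y_model = np.zeros_like(t), []
for k in range(len(t)):
    x_del = x_hist[max(k - int(Tf / dt), 0)]
    u_del = u[k - int(T1 / dt)] if k * dt >= T1 else 0.0
    dx = (w1 * u_del - x_del) / (tau + tau_min)
    x_hist[k] = (x_hist[k - 1] if k else 0.0) + dt * dx
    y_model.append(x_hist[k])
```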
108
© Ivo Bukovsky
CTU in Prague, FME
(Plot: time responses between 800 and 1150 seconds. y(t) is the neural output of TmD-DNU2, yr(t) is the output of the identified plant, u(t) is the input signal into the plant and into TmD-DNU2, and e(t) is the error.)
The plant $G(s)=\dfrac{1}{(2s+1)^{10}}$ approximated by TmD-DNU2.
6. Examples of Applications
-Continuous Time-Delay Dynamic Neural Units (cont.)
109
© Ivo Bukovsky
CTU in Prague, FME
(Plot: TmD-DNU2 for approximation, neural weight convergence over 0 to 500 seconds: the adaptable parameters w1, τ, T1, and Tf converge while the error e decreases; initial weights: τ = T1 = T3 = 9.)
6. Examples of Applications
-Continuous Time-Delay Dynamic Neural Units (cont.)
110
© Ivo Bukovsky
CTU in Prague, FME
(Plots A, B, C, D: comparison of step responses of the TmD-DNU adapted from various initial conditions when approximating $G(s)=\frac{1}{(2s+1)^{10}}$; the responses settle to 1 with no overshoot or with an overshoot below 1 %.)
6. Examples of Applications
-Continuous Time-Delay Dynamic Neural Units (cont.)
111
© Ivo Bukovsky
CTU in Prague, FME
• We applied the dynamic-order-extended TmD-DNU to the approximation of the double-tube heat-exchanger dynamics, to obtain an even more accurate linear approximation of its complex nonlinear model.

The original linear approximation of the double-tube heat exchanger (improved by the DOE TmD-DNU):
$$\frac{d\vartheta_{1,k}(t)}{dt}=\frac{1}{T_{f1}}\big(\vartheta_{1,k-1}(t-T_1)-\vartheta_{1,k}(t)\big)+\frac{1}{T_{w1}}\big(\vartheta_{w,k}(t)-\vartheta_{1,k}(t)\big)$$
$$\frac{d\vartheta_{2,k}(t)}{dt}=\frac{1}{T_{f2}}\big(\vartheta_{2,k+1}(t-T_2)-\vartheta_{2,k}(t)\big)+\frac{1}{T_{w2}}\big(\vartheta_{w,k}(t)-\vartheta_{2,k}(t)\big)$$
$$\frac{d\vartheta_{w,k}(t)}{dt}=K_1\,\vartheta_{1,k}(t)+K_2\,\vartheta_{2,k}(t)-(K_1+K_2)\,\vartheta_{w,k}(t)$$
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
112
© Ivo Bukovsky
CTU in Prague, FME
Dynamic-Order-Extended TmD2-DNU (repeated)
T0, …, Tj, …, Tn ≥ 0; Tf ≥ 0; τ1 ≥ 0, τ2 ≥ 0, τmin > 0; u0 … a constant neural bias
$$G(s)=\frac{1}{\big((\tau_1+\tau_{min})\,s+e^{-T_f s}\big)\,\big((\tau_2+\tau_{min})\,s+1\big)}$$
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
113
© Ivo Bukovsky
CTU in Prague, FME
(Plot: convergence of the DOE TmD-DNU parameters w1(t), Tf(t), T1(t), τ1(t), and τ2(t) over 0 to 15000 seconds.)
(Figure: the Dynamic-Order-Extended TmD-DNU block diagram, repeated: delayed, weighted inputs summed into ν(t), the dynamic block G(s), and the output function φ(x(t)) giving y(t); u0 = a constant neural bias.)
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
114
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the Dynamic-Order-Extended TmD-DNU block diagram, repeated.)
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
115
© Ivo Bukovsky
CTU in Prague, FME
(Plot: yr(t) and yn(t) over 0 to 500 seconds.)
(Figure: the Dynamic-Order-Extended TmD-DNU block diagram, repeated.)
Comparison of the step response of the nonlinear model of the
heat exchanger and the adapted DOE TmD-DNU
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
116
© Ivo Bukovsky
CTU in Prague, FME
Implementation of the genetic algorithm for
adaptation of time delays of DOE TmD-DNU
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
117
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the DOE TmD-DNU block diagram, repeated.)
(Plot: error of the DOE TmD-DNU adapted by dynamic backpropagation and by the genetic algorithm during identification of the double-tube heat-exchanger dynamics, over 0 to 1000 seconds.)
The genetic algorithm improved the adaptation of the time delays and of all neural parameters by about a factor of 80 compared with dynamic BP learning (convenient, but slow).
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
118
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the DOE TmD-DNU block diagram, repeated.)
(Plots: convergence of the parameters w1(t), Tf(t), T1(t), τ1(t), τ2(t) over 0 to 15000 seconds, and the neural weights and dynamic neural parameters of the DOE TmD-DNU adapted by dynamic backpropagation over 0 to 1000 seconds.)
The genetic algorithm improved the adaptation of the time delays and of all neural parameters about 80x compared with pure dynamic BP learning (convenient, but slow).
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
119
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the DOE TmD-DNU block diagram, repeated.)
(Plot: the time delays T1 and Tf over 0 to 1000 seconds.)
Convergence of time delays of DOE TmD-DNU adapted by GA.
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
120
© Ivo Bukovsky
CTU in Prague, FME
(Figure: the DOE TmD-DNU block diagram, repeated.)
(Plot: comparison of yn and yr over 0 to 1000 seconds.)
6. Examples of Applications-Dynamic-Order-Extended Time-Delay Dynamic Neural Units
121
© Ivo Bukovsky
CTU in Prague, FME
Notes on Using HONNU
– Scaling the data may be necessary to assure convergence of a dynamic neural unit, as well as to improve its convergence.
– A significant area (volume) of the basin of attraction of dynamic systems such as HONNU can be well expected in the vicinity of the origin.
– E.g., for the HONNU and TmD-DNU used here, at least one equilibrium point is always the origin [0, 0, …, 0]; other equilibria are well expected "not far" from the origin as well. Values of the usual R-R inter-beat diagrams oscillated within the range (0, 1.2).
– Time series such as R-R (inter-beat) diagrams did not have to be scaled when the units are seconds (values usually in <0, 1>).
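A trivial, hedged sketch of the kind of scaling meant here (the scaling range is an assumption): mapping a time series into a small interval around the origin before adaptation.

```python
import numpy as np

def scale_series(x, lo=0.0, hi=1.0):
    """Min-max scale a time series into [lo, hi] to keep the unit's state near the origin."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return lo + (hi - lo) * (x - x.min()) / span if span > 0 else np.full_like(x, lo)

# e.g., R-R inter-beat intervals in seconds usually already lie in (0, 1.2) and need no scaling
rr = np.array([0.81, 0.79, 0.84, 0.92, 0.88])
print(scale_series(rr))
```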
THANK YOU
Ivo.Bukovsky@fs.cvut.cz
Google search: ‘Ivo Bukovsky’
New Adaptive Evaluation of Chaotic Time Series
NEXT Tutorial HERE 15:40-17:00