# Identification of Reduced-Order Dynamic Models of Gas Turbines

```
CSC Student Seminars (Spring/Summer, 2006)

Identification of Reduced-Order Dynamic Models of Gas Turbines
PhD Student: Xuewu Dai
Supervisors: Tim Breikin and Hong Wang
Outline
1. Introduction
2. Reduced-order Model
3. Long-term Prediction
4. Dynamic Gradient Descent
5. Nonlinear Least-Squares Optimization
6. Future Work
1. Introduction
- Modelling of gas turbines
- Fault detection
- Condition monitoring
Aims
- Reducing computational complexity: real-time operation
- Improving prediction accuracy: long-term prediction
- Robustness
2. Reduced-Order Model

Thermodynamic models:
1. High order: 26th
2. Non-linear

Linearisation -> our ARX models:
1. Reduced order: 1st, 2nd, ...
2. Linear:
   yhat(t) = theta^T z(t)
   theta   = [a_1 ... a_n  b_1 ... b_m]^T
   z(t)    = [y(t-1) ... y(t-n)  u(t-1) ... u(t-m)]^T
3. Long-term Prediction

  a. One-step-ahead prediction model:  u(t), y(t)    -> [Model] -> yhat(t)
  b. Long-term prediction model:       u(t), yhat(t) -> [Model] -> yhat(t)

Model equations:
1. One-step-ahead prediction
   yhat(t) = theta^T z(t)
   z(t)    = [y(t-1) ... y(t-n)  u(t-1) ... u(t-m)]^T
2. Long-term prediction
   yhat(t) = theta^T zhat(t)
   zhat(t) = [yhat(t-1) ... yhat(t-n)  u(t-1) ... u(t-m)]^T
Challenges
- Computational burden:
  how many iterations are needed to identify the parameters?
- Dependency of prediction errors (non-Gaussian noise):
  MSE = sum_{t=1}^{N} (y(t) - yhat(t))^2 / N = 9.1318
  (shown by the autocorrelation of the prediction errors)
4. Dynamic Gradient Descent

Objective function:
  E(theta) = (1/2) * sum_{t=1}^{N} eps(t)^2
           = (1/2) * sum_{t=1}^{N} (y(t) - theta^T zhat(t))^2

Gradient:
  dE/dtheta = (1/2) * sum_{t=1}^{N} d(eps^2(t))/dtheta
            = sum_{t=1}^{N} eps(t) * d eps(t)/d theta
            = -sum_{t=1}^{N} eps(t) * g(t),   where g(t) = d yhat(t)/d theta

Gradient-descent update:
  yhat(t) = theta^T zhat(t)                                  (a)
  theta_{k+1} = theta_k - eta * dE/dtheta_k                  (b)
  dE/dtheta = -sum_{t=1}^{N} (y(t) - yhat(t)) * g(t)         (c)
  g(t) = zhat(t) + [g(t-1) ... g(t-n) 0 ... 0] * theta       (d)
[Figure: DGD search route in the (a, b) parameter plane]
Result 1: steepest-descent direction
[Figure: DGD+BFGS search route in the (a, b) parameter plane; training error vs. DGD+BFGS iteration]
Result 2: BFGS direction
5. Nonlinear Least-Squares Optimization (Gauss-Newton)

  theta_{k+1} = theta_k - eta * [R(theta_k)]^{-1} * dE/dtheta_k               (a)
  R(theta_k) = J^T J                                                          (b)
  dE/dtheta_k = -sum_{t=1}^{N} (y(t) - yhat(t)) * g(t, theta_k)               (c)
  g(t, theta_k) = zhat(t) + [g(t-1, theta_k) ... g(t-n, theta_k) 0 ... 0] * theta_k   (d)
  J = [g(1, theta_k) g(2, theta_k) ... g(N, theta_k)]^T                       (e)

Search direction, step size and initial value
- Search direction: nonlinear least squares (Gauss-Newton)
- Step size: line search (bisection)
- Initial value:
    blind guess: [0.5 0.5 0.5 0.5]
    LSE:         [1.2805 -0.29191 0.10582 0.15903]
Result 3: Gauss-Newton
[Figure: Gauss-Newton + bisection search route in the (a, b) parameter plane; training error vs. iteration]
Prediction of the 1st-Order Model
[Figure: real output (blue) vs. estimated output (red) over 1500 samples; error curve, MSE = 9.1318]
Comparison of 1st-Order Models

Method             MSE        a         b         Iterations
LSE                23.49449   0.987395  0.032551  1
ANFIS              22.2925    N/A       N/A       200
GD                 11.0163    0.9809    0.0376    N/A
Exhaustive search  9.131926   0.977774  0.043542  10000
DGD1*              9.131816   0.977764  0.043568  1000
DGD2*              9.131786   0.977774  0.043544  101
DGD3*              9.131785   0.977776  0.043543  98

* DGD1: steepest-descent direction and adjusted step size
  DGD2: BFGS direction and adjusted step size
  DGD3: Gauss-Newton direction and line search
Higher-Order (2nd-Order) Model

  yhat(t) = a_1 y(t-1) + a_2 y(t-2) + b_1 u(t-1) + b_2 u(t-2)

  initial (by LSE): [1.2805  -0.29191  0.10582   0.15903]
  final:            [1.8604  -0.8641   0.07045  -0.007475]
[Figure: engine output NHP (dashed) vs. model prediction (solid) over 1500 samples; long-term prediction error, MSE = 3.31361166]
6. Future Work
- Initial value problem
- Robustness problem
- Applying this learning algorithm to neural networks
- Model structure selection by autocorrelation of prediction errors
- NARMAX models

Thanks
Appendix: initial value problem

  Setting initial value by LSE                  Manual setting of initial value
  initial: [1.2805 -0.29191 0.10582 0.15903]    initial: [0.5 0.5 0.5 0.5]
  final:   [1.8604 -0.8641 0.07045 -0.007475]   final:   [1.8604 -0.8641 0.07045 -0.007475]
  Final MSE = 3.313612                          Final MSE = 8.60188
Appendix: derivation of the sensitivity g(t)

  d eps(t)/d theta = d(y(t) - yhat(t))/d theta = -d yhat(t)/d theta

  g(t) = d yhat(t)/d theta
       = d(theta^T zhat(t, theta))/d theta
       = zhat(t, theta) + [d zhat(t, theta)/d theta]^T theta

  d zhat(t, theta)/d theta = [g(t-1) g(t-2) ... g(t-n) 0 ... 0]
```
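To make the distinction between one-step-ahead and long-term (free-run) ARX prediction concrete, here is a minimal sketch in Python. This is illustrative only, under assumed conventions: the function names, the NumPy layout, and the regressor ordering are mine, not the authors' code.

```python
import numpy as np

def one_step_prediction(theta, y, u, n, m):
    """One-step-ahead ARX prediction: yhat(t) = theta^T z(t),
    where z(t) stacks *measured* past outputs and inputs."""
    N = len(y)
    yhat = np.zeros(N)
    for t in range(max(n, m), N):
        z = np.concatenate([y[t-n:t][::-1], u[t-m:t][::-1]])
        yhat[t] = theta @ z
    return yhat

def long_term_prediction(theta, y0, u, n, m):
    """Free-run (long-term) prediction: past *predicted* outputs
    replace the measured ones in the regressor zhat(t)."""
    N = len(u)
    yhat = np.zeros(N)
    yhat[:max(n, m)] = y0[:max(n, m)]   # seed with measured data
    for t in range(max(n, m), N):
        zhat = np.concatenate([yhat[t-n:t][::-1], u[t-m:t][::-1]])
        yhat[t] = theta @ zhat
    return yhat
```

The only difference between the two predictors is which signal feeds the regressor; that single change is what makes the long-term error a nonlinear function of theta.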
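The dynamic gradient descent update (a)-(d) can likewise be sketched; the key point is the recursion for the output sensitivity g(t), which accounts for past predictions depending on theta. The function name and layout are assumptions, not the authors' implementation:

```python
import numpy as np

def dgd_step(theta, y, u, eta, n, m):
    """One steepest-descent step on the free-run error, using the
    recursive sensitivity g(t) = zhat(t) + sum_i a_i * g(t-i)  (eq. (d))."""
    N = len(y)
    p = n + m
    yhat = np.zeros(N)
    g = np.zeros((N, p))              # g[t] = d yhat(t) / d theta
    grad = np.zeros(p)
    yhat[:max(n, m)] = y[:max(n, m)]  # seed the free run
    for t in range(max(n, m), N):
        zhat = np.concatenate([yhat[t-n:t][::-1], u[t-m:t][::-1]])
        yhat[t] = theta @ zhat
        # the first n entries of theta are a_1 .. a_n
        g[t] = zhat + sum(theta[i] * g[t-1-i] for i in range(n))
        grad += -(y[t] - yhat[t]) * g[t]   # eq. (c)
    return theta - eta * grad, yhat        # eq. (b)
```

A quick sanity check on the design: if theta already generates y exactly, every residual is zero, so the gradient vanishes and the step leaves theta unchanged.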
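Finally, a hedged sketch of the Gauss-Newton step (a)-(e), building the Jacobian J row by row from the same sensitivities. Again this is an illustration under my own naming and layout, not the original implementation:

```python
import numpy as np

def gauss_newton_step(theta, y, u, eta, n, m):
    """One Gauss-Newton step: theta <- theta - eta * (J^T J)^{-1} dE/dtheta."""
    N = len(y)
    p = n + m
    yhat = np.zeros(N)
    J = np.zeros((N, p))              # row t holds g(t, theta_k)   (eq. (e))
    yhat[:max(n, m)] = y[:max(n, m)]
    for t in range(max(n, m), N):
        zhat = np.concatenate([yhat[t-n:t][::-1], u[t-m:t][::-1]])
        yhat[t] = theta @ zhat
        J[t] = zhat + sum(theta[i] * J[t-1-i] for i in range(n))
    grad = -J.T @ (y - yhat)                       # eq. (c)
    R = J.T @ J                                    # eq. (b)
    return theta - eta * np.linalg.solve(R, grad)  # eq. (a)
```

Scaling the gradient by (J^T J)^{-1} is what gives the much faster convergence reported in the comparison table (about 100 iterations vs. 1000 for plain steepest descent).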