# Feedback: Still the simplest and best solution
Applications to self-optimizing control and stabilization of new operating regimes

Department of Chemical Engineering
Norwegian University of Science and Technology (NTNU)
Trondheim

Xi’an, May 2009
Trondheim, Norway

[Maps and photos: location of Trondheim and NTNU in Norway, relative to the Arctic circle, the North Sea, Oslo, Sweden, Denmark, Germany and the UK]

5
Outline

• I. Why feedback (and not feedforward) ?
• The feedback amplifier
• II. Self-optimizing control:
• How do we link optimization and feedback?
• What should we control?
• III. Stabilizing feedback control:
• Anti-slug control
• Conclusion

7
Example: AMPLIFIER

[Block diagram: r → G (amplifier) → y]

Want: y(t) = α r(t)

Solution 1 (feedforward):
G = α (adjust amplifier gain)

Very difficult in practice:
• Cannot get exact value of α
• Cannot easily adjust α online
• Do not get the same amplification at all frequencies
• Problems with distortion and nonlinearity

8
Black’s feedback amplifier (1927)

[Block diagram: r → G (amplifier) → y, with feedback element K2 from the measured y]

Want: y(t) = α r(t)

Solution 2 (feedback):
G = k (any large amplifier gain, k ≫ α), with feedback gain K2 = 1/α

Closed-loop response: y = G/(1 + G K2) · r ≈ (1/K2) · r = α r

MAGIC! Independent of G, provided G K2 ≫ 1

9
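The closed-loop gain formula can be checked numerically; a minimal sketch, with illustrative gain values that are not from the slides:

```python
# Black's feedback amplifier: y = G*(r - K2*y), so y/r = G / (1 + G*K2).
# With K2 = 1/alpha, the closed-loop gain stays near alpha for any large G.
alpha = 10.0            # desired amplification: want y = alpha * r
K2 = 1.0 / alpha        # the accurate feedback element (e.g. a passive divider)

def closed_loop_gain(G):
    """Gain from r to y for the feedback amplifier."""
    return G / (1.0 + G * K2)

for G in (1e3, 1e4, 1e5):          # raw amplifier gain varies by 100x
    print(G, closed_loop_gain(G))  # stays within about 1% of alpha = 10
```

A 100x change in the raw amplifier gain G barely moves the closed-loop gain: that is the "magic" of high loop gain.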
Example: disturbance rejection
Plant (uncontrolled system)

[Block diagram: disturbance d through Gd and input u through G sum to give output y; plot of the open-loop response to a unit disturbance step, rising to k = 10]

10
1. Feedforward control (measure d)

[Block diagram: the measured disturbance d passes through a feedforward element to give u, which enters G; Gd d adds to give y]

"Perfect" feedforward control: u = −G⁻¹ Gd d
Our case: G = Gd → use u = −d

11
1. Feedforward control: Nominal (perfect model)

[Simulation: with a perfect model, u = −d cancels the disturbance exactly]

12
2. Feedback control

[Block diagram: setpoint ys, error e = ys − y into controller C, giving u into G; disturbance d through Gd adds to give y]

13
2. Feedback PI-control: Nominal case
[Block diagram as above with C = PI controller. Plots: input u and resulting output y for a disturbance step.]

Feedback generates the inverse!
14
Robustness comparison

• Gain error,            k = 5, 10 (nominal), 20
• Time constant error,   τ = 5, 10 (nominal), 20
• Time delay error,      θ = 0 (nominal), 1, 2, 3

15
Robustness: Gain error,
k = 5, 10 (nominal), 20

[Response plots: 1. FEEDFORWARD, 2. FEEDBACK]

16
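The gain-error comparison can be reproduced with a small simulation. This is a sketch with assumed details: a first-order plant, the disturbance gain kd = 10 held fixed so the gain error acts on the input channel, and an illustrative PI tuning that is not from the slides:

```python
# Plant: tau*dy/dt = -y + k*u + kd*d, nominal k = kd = 10, tau = 10.
# Feedforward u = -d is designed for the NOMINAL gain; with a true gain
# k = 20 it mis-cancels the disturbance, while PI feedback removes the error.
def simulate(k_true, controller, T=100.0, dt=0.01):
    tau, kd = 10.0, 10.0
    d = 1.0                        # unit disturbance step
    y, I = 0.0, 0.0                # output and controller integral state
    for _ in range(int(T / dt)):
        u, I = controller(y, I, dt)
        y += dt * (-y + k_true * u + kd * d) / tau   # explicit Euler step
    return y

def feedforward(y, I, dt):
    return -1.0, I                 # u = -d, based on the nominal model

def pi_feedback(y, I, dt, Kc=0.5, tauI=10.0):
    e = 0.0 - y                    # setpoint ys = 0
    I += e * dt
    return Kc * (e + I / tauI), I

y_ff = simulate(20.0, feedforward)   # settles near kd - k_true = -10
y_fb = simulate(20.0, pi_feedback)   # integral action drives y back to 0
```

With a 100% gain error the feedforward scheme leaves a large steady-state offset, while the PI loop, which never uses the gain explicitly, still returns y to its setpoint.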
Robustness: Time constant error,
τ= 5, 10 (nominal), 20

[Response plots: 1. FEEDFORWARD, 2. FEEDBACK]

17
Robustness: Time delay error,
θ = 0 (nominal), 1, 2, 3

[Response plots: 1. FEEDFORWARD, 2. FEEDBACK]

18
Conclusion: Why feedback?
(and not feedforward control)
•   Simple: High gain feedback!
•   Counteract unmeasured disturbances
•   Reduce effect of changes / uncertainty (robustness)
•   Change system dynamics (including stabilization)
•   Linearize the behavior
•   No explicit model required

• MAIN PROBLEM:
Potential instability (may occur "suddenly") with time delay / RHP-zero

An unstable (RHP) zero is a fundamental problem for feedback!
A detailed model + state estimator (Kalman filter) does not help…
20
Outline

• I. Why feedback (and not feedforward) ?
• II. Self-optimizing feedback control:
• How do we link optimization and feedback?
• What should we control?
• III. Stabilizing feedback control: Anti-slug control
• Conclusion

21
Optimal operation (economics)

• Define scalar cost function J(u0,x,d)
• u0: degrees of freedom
• d: disturbances
• x: states (internal variables)
• Optimal operation for given d.
Dynamic optimization problem:

minu0 J(u0,x,d)
subject to:
Model:      f(u0,x,d) = 0
Constraints: g(u0,x,d) < 0

Here: How do we implement optimal operation?
22
1. "Obvious" solution:
Optimizing control = "feedforward"

Estimate d and compute new uopt(d)

Problem: complicated and sensitive to uncertainty

23
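The "obvious" optimizing-control idea, estimate d and then recompute uopt(d), can be sketched on a toy steady-state problem; the cost, constraint and numbers below are invented for illustration:

```python
# Toy problem: min_u J(u, d) subject to g(u) <= 0, solved by brute force.
# In the "feedforward" implementation this re-optimization must be repeated
# every time the disturbance estimate d changes.
def J(u, d):
    return (u - d) ** 2 + 0.1 * u      # cost whose optimum moves with d

def g(u):
    return u - 2.0                     # constraint: u <= 2

def uopt(d):
    """Grid search over u in [-5, 5] with step 0.01."""
    feasible = (i / 100.0 for i in range(-500, 501) if g(i / 100.0) <= 0)
    return min(feasible, key=lambda u: J(u, d))

print(uopt(1.0))   # unconstrained optimum: u = d - 0.05 = 0.95
print(uopt(3.0))   # constraint active: u = 2.0
```

Even on this toy, the answer depends on knowing J, g and d accurately, which is exactly the sensitivity to uncertainty the slide warns about.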
2. In Practice: Feedback implementation

Issue:
What should we control?

24
Process control hierarchy

[Hierarchy diagram: RTO at the top, MPC below, PID at the bottom. Question: which controlled variables y1 = c should link the layers (economics)?]
25
What should we control?

• CONTROL ACTIVE CONSTRAINTS!
– Optimal solution is usually at constraints, that is, most of the degrees of
freedom are used to satisfy “active constraints”, g(u0,d) = 0
– Implementation of active constraints is usually simple.

• WHAT MORE SHOULD WE CONTROL?
– But what about the remaining unconstrained degrees of freedom?
– Look for “self-optimizing” controlled variables!

26
Self-optimizing Control

•   Definition of self-optimizing control:
– Self-optimizing control is when acceptable operation (= acceptable loss) can be
achieved using constant setpoints cs for the controlled variables c (c = cs),
without the need to re-optimize when disturbances occur.

27
Optimal operation – Runner
– Cost: J=T
– One degree of freedom (u=power)
– Optimal operation?

28
Optimal operation - Runner

Solution 1: Optimizing control

• Even getting a reasonable model
requires > 10 PhD’s  … and
the model has to be fitted to each
individual….

• Clearly impractical!

29
Optimal operation - Runner

Solution 2 – Feedback
(Self-optimizing control)

– What should we control?

30
Optimal operation - Runner

Self-optimizing control: Sprinter (100m)

• 1. Optimal operation of Sprinter, J=T
– Active constraint control:
• Maximum speed (”no thinking required”)

31
Optimal operation - Runner

Self-optimizing control: Marathon (40 km)

• Optimal operation of Marathon runner, J=T
• Any self-optimizing variable c (to control at
constant setpoint)?
•   c1 = distance to leader of race
•   c2 = speed
•   c3 = heart rate
•   c4 = level of lactate in muscles

32
Optimal operation - Runner

Conclusion Marathon runner

select one measurement

c = heart rate

• Simple and robust implementation
• Disturbances are indirectly handled by keeping a constant heart rate
• May have infrequent adjustment of setpoint (heart rate)

33
Unconstrained optimum

Optimal operation

[Plot: cost J versus controlled variable c, with minimum Jopt at c = copt]

35
Unconstrained optimum

Optimal operation

[Plot: cost curves J(c) for different disturbances d; the optimum (copt, Jopt) moves with d, and implementation error n offsets c from copt]

Two problems:
• 1. Optimum moves because of disturbances d: copt(d)
• 2. Implementation error, c = copt + n

36
Unconstrained optimum

Candidate controlled variables c
for self-optimizing control
Intuitive
1. The optimal value of c should be insensitive to disturbances (avoids
problem 1)
•   The ideal self-optimizing variable is the gradient, c = Ju = ∂J/∂u
•   Its optimal value is always Ju = 0 (the gradient changes sign at the optimum)

2. The optimum should be flat (avoids problem 2, implementation error).
Equivalently: the value of c should be sensitive to the degrees of freedom u.
•   "Want large gain", |G|
•   Or more generally: maximize the minimum singular value, σ(Gs)
38
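The claim that the gradient Ju is the ideal self-optimizing variable can be illustrated on a toy quadratic cost (all numbers invented):

```python
# Toy cost J(u, d) = (u - d)^2, so uopt(d) = d and Jopt = 0.
# Keeping c = Ju = dJ/du at the constant setpoint 0 tracks the optimum
# for ANY disturbance, with no re-optimization.
def J(u, d):
    return (u - d) ** 2

def Ju(u, d):
    return 2.0 * (u - d)               # gradient: the ideal controlled variable

def control_gradient_to_zero(d, steps=200, gain=0.25):
    """Integral-type feedback driving c = Ju to its setpoint 0."""
    u = 0.0
    for _ in range(steps):
        u -= gain * (Ju(u, d) - 0.0)   # adjust u until the gradient vanishes
    return u

for d in (-1.0, 0.0, 2.5):
    u = control_gradient_to_zero(d)
    loss = J(u, d)                     # loss relative to Jopt = 0: numerically zero
```

The loop never needs to know d: driving the invariant c = Ju to zero is equivalent to re-optimizing.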
Unconstrained optimum

Maximum gain rule (Skogestad and Postlethwaite, 1996):
Look for variables that maximize the scaled gain (Gs)
(minimum singular value of the appropriately scaled
steady-state gain matrix Gs from u to c)

39
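For scalar candidates the maximum gain rule reduces to comparing scaled gains. A sketch with invented numbers, loosely echoing the marathon-runner candidates:

```python
# Scaled gain = |steady-state gain from u to c| / span(c), where
# span(c) = |optimal variation of c over disturbances| + |implementation error n|.
# Maximum gain rule: pick the candidate with the LARGEST scaled gain.
candidates = {
    # name: (gain dc/du, optimal variation, implementation error) -- invented
    "c2_speed":      (1.0, 2.0, 0.5),
    "c3_heart_rate": (4.0, 1.0, 0.5),
    "c4_lactate":    (0.5, 0.3, 0.4),
}

def scaled_gain(gain, opt_var, noise):
    return abs(gain) / (abs(opt_var) + abs(noise))

best = max(candidates, key=lambda name: scaled_gain(*candidates[name]))
print(best)   # with these numbers: "c3_heart_rate"
```

A large scaled gain means a flat optimum in c: implementation errors in c translate into only small losses in J.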
Unconstrained optimum

Proof: Local analysis
[Sketch: quadratic cost J versus input u around uopt, with the linear model c = G u]

40
Unconstrained optimum

Optimal measurement combinations

Exact solutions for quadratic optimization problems

1. Nullspace method. No loss for disturbances (d)

2. Generalized (with noise n)

•    c = Hy can be considered as linear invariants for the quadratic optimization
problem – which can be used for feedback implementation of optimal solution!
•    Application: Explicit MPC
* V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009)

41
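The nullspace method fits in a few lines. The numbers below are toy values; in general H is any matrix satisfying H F = 0, where F = dyopt/dd is the optimal sensitivity:

```python
# 3 measurements, 1 disturbance: y_opt(d) = y0 + F*d (toy linear sensitivity).
# Choosing H in the left nullspace of F makes c = H*y invariant to d at the
# optimum, so holding c constant gives zero loss (quadratic case, no noise).
F = [1.0, 2.0, 3.0]            # optimal sensitivity dy_opt/dd (invented)
H = [2.0, -1.0, 0.0]           # satisfies H . F = 2 - 2 + 0 = 0
assert sum(h * f for h, f in zip(H, F)) == 0.0

y0 = [0.5, -1.0, 2.0]          # optimal measurements at nominal d = 0 (invented)

def y_opt(d):
    return [y0i + Fi * d for y0i, Fi in zip(y0, F)]

def c(y):
    return sum(h * yi for h, yi in zip(H, y))

c_vals = [c(y_opt(d)) for d in (-1.0, 0.0, 3.0)]   # identical for every d
```

Because H F = 0, the disturbance term drops out of c = H y0 + (H F) d, which is exactly the "linear invariant" used for feedback implementation of the optimal solution.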
Example: CO2 refrigeration cycle

[Flowsheet: simple CO2 refrigeration cycle; pH denotes the high-side pressure]

J = Ws (work supplied)
DOF = u (valve opening, z)
Main disturbances:
d1 = TH
d2 = TCs (setpoint)
d3 = UAloss

What should we control?

42
CO2 cycle: Maximum gain rule

43
Conclusion CO2 refrigeration cycle

Self-optimizing c = "temperature-corrected high pressure"

44
Outline

•   I. Why feedback (and not feedforward) ?
•   II. Self-optimizing feedback control: What should we control?
•   III. Stabilizing feedback control: Anti-slug control
•   IV. Conclusion

45
Application stabilizing feedback control:
Anti-slug control

Two-phase pipe flow
(liquid and vapor)
Slug (liquid) buildup
46
Slug cycle (stable limit cycle)
Experiments performed by the Multiphase Laboratory, NTNU

47
Flow map with open valve

[Flow map: superficial liquid velocity Uso [m/s] (0 to 1) versus superficial gas velocity Usg [m/s] (0 to 5), with regions of pulsing flow, pulsing/slugging and riser slugging]

48
Experimental mini-loop

49
Experimental mini-loop
Valve opening (z) = 100%

[Loop diagram (valve opening z, pressures p1 and p2) with measured pressure traces]
50
Experimental mini-loop
Valve opening (z) = 25%

[Loop diagram (valve opening z, pressures p1 and p2) with measured pressure traces]
51
Experimental mini-loop
Valve opening (z) = 15%

[Loop diagram (valve opening z, pressures p1 and p2) with measured pressure traces]
52
Experimental mini-loop: Bifurcation diagram

[Bifurcation diagram: pressures p1 and p2 versus valve opening z [%]; a steady no-slug branch and a slugging branch]

53
Avoid slugging?

•   Operate away from optimal point
•   Design changes
•   Feedforward control?
•   Feedback control?

54
Design change

Avoid slugging:
1. Close valve (but increases pressure)

[Bifurcation diagram: no slugging when the valve opening z is sufficiently small]
55
Design change

Avoid slugging:
2. Other design changes to avoid slugging


56
Design change

Minimize effect of slugging:
3. Build large slug-catcher


• Most common strategy in practice

57
Avoid slugging: 4. Feedback control?
Comparison with a simple 3-state model:

[Bifurcation diagram (pressure versus valve opening z [%]) with model fit]

Predicted smooth flow: desirable, but open-loop unstable
58
Avoid slugging:
4. ”Active” feedback control

[Control diagram: pressure transmitter PT measures p1; pressure controller PC with setpoint "ref" manipulates the valve opening z]

Simple PI-controller
59
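Why feedback can hold the open-loop unstable smooth-flow point can be seen on a one-state caricature. The model and numbers below are invented for illustration, not the 3-state slug model from the slides:

```python
# Unstable first-order model x' = a*x + b*u with a > 0 (x ~ pressure
# deviation, u ~ valve adjustment). Proportional feedback u = -K*x moves
# the pole from a to a - b*K, which is stable for K > a/b.
def simulate(K, a=0.2, b=1.0, x0=1.0, dt=0.01, T=50.0):
    x = x0
    for _ in range(int(T / dt)):
        u = -K * x                  # simple proportional "pressure" feedback
        x += dt * (a * x + b * u)   # explicit Euler step
    return x

x_open = simulate(K=0.0)     # no control: the deviation grows without bound
x_closed = simulate(K=1.0)   # feedback: the deviation decays to zero
```

No design change is needed: the same plant becomes stable once the loop is closed, which is what the PI anti-slug controller exploits.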
Anti slug control: Mini-loop experiments

[Experimental time traces: p1 [bar] and valve opening z [%], with controller ON and then controller OFF]
60
Anti slug control: Full-scale offshore
experiments at Hod-Vallhall field (Havre,1999)

61
Analysis: Poles and zeros

[Diagram: riser with topside measurements (FT, ρT, DP) and upstream pressure P1]

Operating points:

| z     | P1    | DP   | Poles                    |
|-------|-------|------|--------------------------|
| 0.175 | 70.05 | 1.94 | -6.11, 0.0008 ± 0.0067i  |
| 0.25  | 69    | 0.96 | -6.21, 0.0027 ± 0.0092i  |

Zeros for candidate measurements y:

| z     | P1 [Bar] | DP [Bar]       | ρT [kg/m3]               | FQ [m3/s]           | FW [kg/s]        |
|-------|----------|----------------|--------------------------|---------------------|------------------|
| 0.175 | -0.0034  | 3.2473, 0.0142 | -0.0004, 0.0048, -0.0004 | -4.5722, -0.0032, 0 | -7.6315, -0.0004 |
| 0.25  | -0.0034  | 3.4828, 0.0131 | -0.0004, 0.0048, -0.0004 | -4.6276, -0.0032, 0 | -7.7528, -0.0004 |

62
Topside measurements: Ooops.... RHP-zeros or zeros close to origin
Stabilization with topside measurements:
Avoid RHP-zeros by using 2 measurements

• Model based control (LQG) with 2 top measurements: DP and
density ρT
63
Summary anti slug control

•    Stabilization of the smooth flow regime = $$$$!
•    Stabilization using the downhole pressure is simple
•    Stabilization using topside measurements is possible
•    Control can make a difference!

Thanks to: Espen Storkaas + Heidi Sivertsen + Håkon Dahl-Olsen + Ingvald Bårdsen

64
Conclusions

• Feedback is an extremely powerful tool
• simple
• robust
• Complex systems can be controlled by hierarchies (cascades) of single-
input-single-output (SISO) control loops
• Control the right variables to achieve ”self-optimizing control”
• Feedback can make new things possible
• Stabilization (anti-slug)

S. Skogestad. "Feedback: Still the simplest and best solution" Presented at ICIEA 2009, Xi’an, China, May 2009

65
Engineering systems

• Most (all?) large-scale engineering systems are controlled using
hierarchies of quite simple single-loop controllers
– Commercial aircraft
– Large-scale chemical plant (refinery)
• 1000’s of loops
• Simple components:
on-off + P-control + PI-control + nonlinear fixes + some feedforward

Same in biological systems

66
Self-optimizing control: Recycle process
J = V (minimize energy)

[Flowsheet: reactor with recycle and distillation column; streams numbered 1 to 5]

Given feedrate F0 and column pressure:
Nm = 5
3 economic (steady-state) DOFs
Constraints: Mr < Mrmax, xB > xBmin = 0.98

67

DOF = degree of freedom
Recycle process: Control active constraints

Active constraint: Mr = Mrmax
Active constraint: xB = xBmin
Remaining DOF: L

One unconstrained DOF left for optimization:
What more should we control?

68
Conventional: Looks good

Luyben snow-ball rule: Not promising economically
69
Recycle process: Loss with constant setpoint, cs

Large loss with c = F (Luyben rule)

Negligible loss with c = L/F or c = temperature

70
Recycle process: Proposed control structure
for case with J = V (minimize energy)

Active constraint
Mr = Mrmax

Active constraint
xB = xBmin

Self-optimizing loop:
Adjust L such that L/F is constant

71
