# Tutorial 2 – Euler Lagrange


In one sentence:

$$F_y = \frac{d}{dx} F_z$$

Important facts:
1) The solutions of the EL equation are called extremals.
2) A minimum/maximum of the "simplest problem" is also an extremal.
3) It is easier to solve the EL equation first and then check whether we obtained a solution of our problem.
4) The EL equation is in fact a second-order ordinary differential equation. Most problems do not have an easy closed-form solution; we solve them numerically.
5) The theory of the Calculus of Variations has further results dealing with:
   - Many variables.
   - Higher derivatives.
   - Sufficient conditions for a minimum/maximum.
   - Corner points.
   - Multi-dimensional integrals.
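Fact 4 above says that most EL equations are solved numerically. As a minimal sketch (with illustrative, hypothetical data), take $F(x,y,y') = y^2 + (y')^2$, whose EL equation reduces to $y'' = y$; a finite-difference discretization with boundary conditions $y(0)=0$, $y(1)=1$ recovers the exact extremal $\sinh(x)/\sinh(1)$:

```python
import numpy as np

# For F(x, y, y') = y^2 + y'^2 the Euler-Lagrange equation
# d/dx F_z = F_y reduces to the ODE y'' = y.
# We discretize y'' with central differences on [0, 1] with the
# illustrative boundary conditions y(0) = 0, y(1) = 1.
n = 101                       # grid points
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Linear system: (y[i-1] - 2 y[i] + y[i+1]) / dx^2 - y[i] = 0 inside,
# plus the two boundary conditions.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0;  b[0] = 0.0        # y(0) = 0
A[-1, -1] = 1.0; b[-1] = 1.0      # y(1) = 1
for i in range(1, n - 1):
    A[i, i - 1] = 1.0 / dx**2
    A[i, i] = -2.0 / dx**2 - 1.0
    A[i, i + 1] = 1.0 / dx**2

y = np.linalg.solve(A, b)

# Exact extremal for comparison: y(x) = sinh(x) / sinh(1)
exact = np.sinh(x) / np.sinh(1.0)
print(float(np.max(np.abs(y - exact))))   # small discretization error
```

The central-difference error is $O(\Delta x^2)$, so with 101 grid points the numerical extremal agrees with the exact one to roughly 5 digits.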

In the following sections we will prove the EL equation in its simplest setting. We will show results for the min/max decision and for 2D integrals.

The proof relies on three lemmas.

## Lemma 1
Let $u(x)$ be a continuous function on $[a,b]$ and suppose that

$$I[h] = \int_a^b u(x)\,h(x)\,dx = 0 \qquad \forall h \in C^1[a,b],\ h(a) = h(b) = 0.$$

Then $u(x) = 0$ on $[a,b]$.

## Proof of Lemma 1
Assume there exists $c \in [a,b]$ such that $u(c) \neq 0$; WLOG $u(c) > 0$.
Since $u$ is continuous, there exists an interval $[a',b'] \subset [a,b]$ such that $u(x) > 0$ for all $x \in (a',b')$.
Define

$$h(x) = \begin{cases} (x-a')^2\,(b'-x)^2 & x \in [a',b'] \\ 0 & \text{otherwise} \end{cases}$$

Then $h$ fulfills the requirements above. But

$$\int_{a'}^{b'} u(x)\,h(x)\,dx > 0,$$

a contradiction.
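The contradiction at the end of the proof can be sanity-checked numerically. Below is a small sketch with the illustrative choices $(a',b') = (0.2, 0.8)$ and $u(x) = 1+x$ (any $u$ positive on the interval would do):

```python
import numpy as np

# Bump function from the proof of Lemma 1 on [a', b'] = [0.2, 0.8]
ap, bp = 0.2, 0.8
x = np.linspace(ap, bp, 1001)
h = (x - ap) ** 2 * (bp - x) ** 2    # vanishes at a' and b', positive inside
u = 1.0 + x                          # an illustrative u with u > 0 on (a', b')

# Composite trapezoidal rule for the integral of u * h over [a', b']
f = u * h
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

print(h[0], h[-1])       # both 0: h extends by zero to an admissible h
print(integral > 0)      # the integral is strictly positive
```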
## Lemma 2 (du Bois-Reymond)
Let $v(x) \in C_s^1[a,b]$ and assume

$$\int_a^b v(x)\,h'(x)\,dx = 0 \qquad \forall h \in C_s^1[a,b],\ h(a) = h(b) = 0,\ h' \in C_s^1[a,b].$$

Then there exists a constant $c$ such that $v(x) = c$ at every point of continuity.

## Proof of Lemma 2
Define

$$c = \frac{1}{b-a}\int_a^b v(x)\,dx.$$

Then note that

$$\int_a^b \big(v(x)-c\big)\,dx = \int_a^b v(x)\,dx - c\,(b-a) = 0.$$

Now define

$$h(x) = \int_a^x \big(v(t)-c\big)\,dt.$$

Note that $h(a) = h(b) = 0$, $h'(x) = v(x) - c$, and $h \in C_s^1[a,b]$.

Let us calculate:

$$\int_a^b \big(v(x)-c\big)^2\,dx = \int_a^b \big(v(x)-c\big)\,h'(x)\,dx = \int_a^b v(x)\,h'(x)\,dx - c\int_a^b h'(x)\,dx = 0.$$

Hence $v(x) = c$.

Remark: the calculation above should be carried out on each interval of continuity, but the conclusion is the same.
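The pieces of this construction ($c$, $h$, and the identity $\int (v-c)^2\,dx = \int (v-c)\,h'\,dx$) can be checked numerically for a concrete function. As an illustrative sketch, take $v(x) = x$ on $[0,1]$: then $c = 1/2$, $h(0)=h(1)=0$, and $\int_0^1 (v-c)^2\,dx = 1/12$, which is nonzero precisely because this $v$ is not constant (so it cannot satisfy the hypothesis of the lemma).

```python
import numpy as np

# The construction from the proof, for the concrete choice v(x) = x on [0, 1]
a, b = 0.0, 1.0
x = np.linspace(a, b, 10001)
v = x.copy()

def trapezoid(f):
    """Composite trapezoidal rule over the grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

c = trapezoid(v) / (b - a)          # mean value of v: here c = 1/2
# h(x) = integral from a to x of (v(t) - c) dt, computed cumulatively
g = v - c
dh = 0.5 * (g[1:] + g[:-1]) * np.diff(x)
h = np.concatenate([[0.0], np.cumsum(dh)])

print(round(c, 6))                  # 0.5
print(abs(h[-1]) < 1e-9)            # h(b) = 0, as the proof requires
lhs = trapezoid((v - c) ** 2)       # = 1/12, nonzero since v is not constant
print(round(lhs, 6))
```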

## Lemma 3
Given $u(x) \in C_s[a,b]$ and $v(x) \in C_s[a,b]$, assume

$$I[h] = \int_a^b \big(u(x)\,h(x) + v(x)\,h'(x)\big)\,dx = 0 \qquad \forall h \in C_s^1[a,b],\ h(a) = h(b) = 0.$$

Then $v(x) \in C_s^1[a,b]$ and

$$v(x) = \int_a^x u(t)\,dt + c.$$
## Proof of Lemma 3
Let $U(x) = \int_a^x u(t)\,dt$, so $U'(x) = u(x)$. Integrating by parts and using $h(a) = h(b) = 0$:

$$\int_a^b u(x)\,h(x)\,dx = \int_a^b U'(x)\,h(x)\,dx = \big[U(x)\,h(x)\big]_a^b - \int_a^b U(x)\,h'(x)\,dx = -\int_a^b U(x)\,h'(x)\,dx.$$

So,

$$I[h] = \int_a^b \big(u(x)\,h(x) + v(x)\,h'(x)\big)\,dx = \int_a^b \big(-U(x) + v(x)\big)\,h'(x)\,dx = 0.$$

From Lemma 2 we get $-U(x) + v(x) = c$, i.e.

$$v(x) = \int_a^x u(t)\,dt + c.$$

## Reminder from Tutorial 1
We would like to measure distances between functions in order to be able to define an extremum:

$$\|f\|_{C[a,b]} = \sup_{x\in[a,b]} |f(x)|$$

$$\|f\|_{C_s[a,b]} = \sup_{x\in[a,b]} |f(x)|$$

$$\|f\|_{C^1[a,b]} = \|f\|_{C[a,b]} + \|f'\|_{C[a,b]}$$

$$\|f\|_{C_s^1[a,b]} = \|f\|_{C_s[a,b]} + \|f'\|_{C_s[a,b]}$$
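On a finite grid the suprema above become maxima, which gives a quick way to compute these norms approximately. A small sketch with the illustrative choice $f(x) = \sin(\pi x)$ on $[0,1]$, where $\|f\|_C = 1$ and $\|f\|_{C^1} = 1 + \pi$:

```python
import numpy as np

# Grid approximation of the C and C^1 norms for f(x) = sin(pi x) on [0, 1]
x = np.linspace(0.0, 1.0, 100001)
f = np.sin(np.pi * x)
df = np.pi * np.cos(np.pi * x)                 # f' computed analytically

norm_C = float(np.max(np.abs(f)))              # ||f||_C   = sup |f|
norm_C1 = norm_C + float(np.max(np.abs(df)))   # ||f||_C1  = ||f||_C + ||f'||_C
print(round(norm_C, 4), round(norm_C1, 4))     # 1.0 and 1 + pi
```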

Now we can define a minimum for the functional.
Let $I[y]$ be a functional on $\Sigma \subset C_s^1[a,b]$.

Weak Local Minimum
$y_0(x) \in \Sigma$ is a weak local minimum iff

$$\exists \varepsilon > 0\ \forall y \in \Sigma:\ \|y - y_0\|_{C^1[a,b]} < \varepsilon \ \Rightarrow\ I[y] \ge I[y_0].$$

Strong Local Minimum
$y_0(x) \in \Sigma$ is a strong local minimum iff

$$\exists \varepsilon > 0\ \forall y \in \Sigma:\ \|y - y_0\|_{C[a,b]} < \varepsilon \ \Rightarrow\ I[y] \ge I[y_0].$$

Question:
Does Strong ⇒ Weak or Weak ⇒ Strong? Prove it!
## The First Variation

In function theory we learned that a function can attain an extremum only where its first derivative is zero. In a similar way we will define the first variation and prove that if the functional attains an extremum, its first variation must be zero.

Let $y_0 \in \Sigma \subset C_s^1[a,b]$.
We will say that $h(x) \in C_s^1[a,b]$ belongs to the set $D(y_0)$ of "allowed variations" iff

$$\exists \varepsilon > 0\ \forall |t| < \varepsilon:\ y_0 + th \in \Sigma.$$

Under these conditions the function

$$\varphi_{y_0,h}(t) = I[y_0 + th]$$

is defined in a neighborhood of $t = 0$.

Definition:

$$\varphi'_{y_0,h}(0) = \frac{d}{dt} I[y_0 + th]\bigg|_{t=0}$$

is called the first variation at $y_0$ in the direction of $h$. We will denote

$$\delta I[h] = \frac{d}{dt} I[y_0 + th]\bigg|_{t=0}.$$

## Theorem 1
If $y_0(x)$ is a weak local minimum of the functional $I[y]$, $y \in \Sigma$, the function $h(x)$ belongs to the set $D(y_0)$ of allowed variations, and the function $\varphi(t) = I[y_0 + th]$ is differentiable in a neighborhood of $0$, then $\delta I[h] = 0$.

## Proof of Theorem 1

Since $y_0$ is a weak local minimum,

$$(*) \qquad I[y_0] \le I[y] \qquad \forall y:\ \|y - y_0\|_{C^1[a,b]} < \varepsilon.$$

So if $|t| < \dfrac{\varepsilon}{\|h\|_{C^1[a,b]}}$, then $\|th\|_{C^1[a,b]} < \varepsilon$ and the condition of $(*)$ holds for $y = y_0 + th$.

Since $h(x)$ belongs to the set $D(y_0)$ of allowed variations, there is $\delta$ such that $\forall |t| < \delta:\ y_0 + th \in \Sigma$; for $|t|$ small enough, both conditions hold and $(*)$ is valid as well.

Since $\varphi(0) = I[y_0] \le \varphi(t) = I[y_0 + th]$, $t = 0$ is a local minimum of $\varphi$. According to function theory, $\varphi'(0) = 0$.
## Reminder from Tutorial 1 (the simplest problem)

$$I[y] = \int_a^b F\big(x, y(x), y'(x)\big)\,dx \to \min,$$

$$y(a) = A, \quad y(b) = B, \quad y \in C_s^1[a,b].$$

Question:
Our domain is $\Sigma = \{y \in C_s^1[a,b] : y(a) = A,\ y(b) = B\}$.
If we choose $y_0 \in \Sigma$, what is the set $D(y_0)$ of allowed variations?

Solution:
$y = y_0 + th \in \Sigma$, so $y(a) = A$ and $y_0(a) = A$, $y(b) = B$ and $y_0(b) = B$

$$\Rightarrow h(a) = h(b) = 0,$$

$$D(y_0) = \{h \in C_s^1[a,b] : h(a) = h(b) = 0\}.$$

## Theorem 2
Let $F(x,y,z) \in C^1(\mathbb{R}^3)$ and $y_0, h \in C_s^1[a,b]$. Then

$$\delta I[h] = \int_a^b \big\{ F_y\big(x, y_0(x), y_0'(x)\big)\,h(x) + F_z\big(x, y_0(x), y_0'(x)\big)\,h'(x) \big\}\,dx.$$

## Proof of Theorem 2
The proof should be carried out on each interval of continuity; since the integral is a finite sum over these intervals, by linearity the full proof is similar. We have

$$\varphi(t) = I[y_0 + th] = \int_a^b F\big(x, y_0(x) + th(x), y_0'(x) + th'(x)\big)\,dx.$$

By the Leibniz rule (differentiation under the integral sign):

$$\varphi'(t) = \int_a^b \big\{ F_y\big(x, y_0 + th, y_0' + th'\big)\,h(x) + F_z\big(x, y_0 + th, y_0' + th'\big)\,h'(x) \big\}\,dx.$$

Setting $t = 0$ gives the desired formula.
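Theorem 2 can be sanity-checked numerically: a minimal sketch with the illustrative choices $F(x,y,z) = y^2 + z^2$, $y_0(x) = x$, $h(x) = x(1-x)$ on $[0,1]$, where a finite-difference derivative of $\varphi(t) = I[y_0 + th]$ at $t = 0$ should agree with the integral formula (both equal $1/6$ here by a direct computation):

```python
import numpy as np

# Check of Theorem 2 for F(x, y, z) = y^2 + z^2, y0(x) = x,
# h(x) = x (1 - x) on [0, 1].
x = np.linspace(0.0, 1.0, 10001)

def trapezoid(f):
    """Composite trapezoidal rule over the grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

h = x * (1 - x)
dh = 1 - 2 * x

def I(t):
    """I[y0 + t h], evaluated by quadrature."""
    y = x + t * h
    dy = 1.0 + t * dh
    return trapezoid(y ** 2 + dy ** 2)

# Central finite difference of phi(t) = I[y0 + t h] at t = 0
t = 1e-6
fd = (I(t) - I(-t)) / (2 * t)

# Theorem 2's formula, with F_y = 2y and F_z = 2z along (x, y0, y0')
formula = trapezoid(2 * x * h + 2 * 1.0 * dh)

print(round(fd, 4), round(formula, 4))   # both are ~ 1/6
```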
## Euler Lagrange Theorem – Integral version
Assume:

$$I[y] = \int_a^b F\big(x, y(x), y'(x)\big)\,dx \to \min,$$

$$y(a) = A, \quad y(b) = B, \quad y \in C_s^1[a,b], \quad F \in C^1(\mathbb{R}^3).$$

Then, for a weak local minimum $y_0$, there exists a constant $c$ such that

$$F_z\big(x, y_0(x), y_0'(x)\big) = \int_a^x F_y\big(t, y_0(t), y_0'(t)\big)\,dt + c$$

(possibly excluding the points of discontinuity).
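As a sketch, the integral form can be verified numerically on the illustrative example $F(x,y,z) = y^2 + z^2$ with boundary data $y(0) = 0$, $y(1) = 1$, whose extremal is $y_0(x) = \sinh(x)/\sinh(1)$: the quantity $F_z - \int_0^x F_y\,dt$ should be (numerically) constant, with $c = 2/\sinh(1)$.

```python
import numpy as np

# Check of F_z(x, y0, y0') = ∫_0^x F_y dt + c for F(x, y, z) = y^2 + z^2
# on [0, 1]; the extremal with y(0) = 0, y(1) = 1 is y0(x) = sinh(x)/sinh(1).
x = np.linspace(0.0, 1.0, 10001)
y0 = np.sinh(x) / np.sinh(1.0)
dy0 = np.cosh(x) / np.sinh(1.0)

Fz = 2 * dy0          # F_z = 2 z evaluated along the extremal
Fy = 2 * y0           # F_y = 2 y evaluated along the extremal

# Cumulative trapezoidal rule for ∫_0^x F_y dt
dI = 0.5 * (Fy[1:] + Fy[:-1]) * np.diff(x)
integral = np.concatenate([[0.0], np.cumsum(dI)])

c = Fz - integral     # should be a constant function of x
print(float(c.max() - c.min()) < 1e-6)   # c is (numerically) constant
print(round(float(c[0]), 4))             # c = 2 / sinh(1), about 1.7018
```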

## Proof
We will use Theorems 1 and 2 and Lemma 3.

From Theorem 1: $\delta I[h] = 0$.
From Theorems 1 and 2:

$$(*) \qquad \delta I[h] = \int_a^b \{F_y\,h + F_z\,h'\}\,dx = 0.$$

From $(*)$ and Lemma 3, there is a constant $c$ such that

$$F_z\big(x, y_0(x), y_0'(x)\big) = \int_a^x F_y\big(t, y_0(t), y_0'(t)\big)\,dt + c$$

(possibly excluding the points of discontinuity).
## Euler Lagrange Theorem – Differential version

$y_0'$ need not be continuous, but on each interval of continuity we may apply the Newton–Leibniz theorem and differentiate the integral form. We can write

$$\frac{d}{dx} F_z\big(x, y_0(x), y_0'(x)\big) = F_y\big(x, y_0(x), y_0'(x)\big),$$

or in short:

$$\frac{d}{dx} F_z = F_y.$$
