# Stein's Unbiased Risk Estimator (SURE) for Optical Flow

```

Dr. Mingren Shi
Mathematics and Computing
University of Southern Queensland

OUTLINE

What is optical ﬂow?

Overview: optical ﬂow estimation

Lucas-Kanade Method

Stein’s Unbiased Risk Estimator (SURE)

Conclusion

What is optical ﬂow?
Optical flow is the apparent motion of the brightness/intensity patterns observed when a camera is moving relative to the objects being imaged.
Optical flow equation (OFE)
• Let I = I(x1, x2, t) be the image intensity (irradiance) function at time t at the image point (x1, x2).
• Let u = (u1, u2)^T ≡ (dx1/dt, dx2/dt)^T be the optical flow.
• Assume that the change of intensity of a particular point in a moving pattern with time is very small, i.e.

    0 ≈ dI/dt = (∂I/∂x1)(dx1/dt) + (∂I/∂x2)(dx2/dt) + ∂I/∂t,

  or

    (∂I/∂x1) u1 + (∂I/∂x2) u2 + ∂I/∂t ≈ 0.        (0.1)

• But the OFE (0.1) is ill-posed. Why?
  There are two unknowns (u1, u2) but only one equation ⟹ it has infinitely many solutions.

Fig. 1: The optical flow vector u and its components u1, u2.
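As a quick numerical illustration of the underdetermination above, the sketch below (with hypothetical gradient values, not taken from any real image) shows that a whole line of flow vectors satisfies the single OFE:

```python
# Hypothetical gradient values at a single pixel (not from a real image).
Ix1, Ix2, It = 0.5, -0.25, 0.1

# The OFE  Ix1*u1 + Ix2*u2 + It = 0  is one equation in two unknowns:
# for ANY choice of u1 we can solve for u2, so solutions form a line.
for u1 in (-1.0, 0.0, 2.0):
    u2 = -(Ix1 * u1 + It) / Ix2          # requires Ix2 != 0
    residual = Ix1 * u1 + Ix2 * u2 + It  # ~0 for every u1
    print(u1, u2, residual)
```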

• A constraint should be imposed on u.

How to estimate optical flow?
• There are many smoothing methods for estimating the optical flow.

Lucas-Kanade method
Assumption: the optical flow in a neighbourhood of the central pixel P is constant.

Fig. 2: A 3 × 3 neighbourhood around the central pixel P.
The smoothing parameter is the neighbourhood size n (for (2n + 1) × (2n + 1) pixels).
Selecting this tuning parameter is difficult in general, but it plays a crucial role and has a profound effect on the results.
• n too small ⟹ numerical errors may be excessively amplified.
• n too large ⟹ the solution may be far from the true flow.
• This is a tradeoff situation.
• There is an optimal value of n.
• Example: Divergent tree (the optical flow at each pixel is plotted as a vector, cf. Fig. 1).
  image file: treed.mat
  data files: Treed1.dat - 11 × 11 (optimal)
              Treed2.dat - 3 × 3 (too small)
              Treed3.dat - 31 × 31 (too large)
              Treed4.dat - SURE curve

Lucas-Kanade Method
Simplified notation:

    It = ∂I/∂t,   Ix1 = ∂I/∂x1,   Ix2 = ∂I/∂x2,        (1.0)
    P = (x1, x2): the (central) position or pixel.

Rewrite the OFE (0.1) (w.r.t. P) as

    -It,P = u1,P Ix1,P + u2,P Ix2,P + ε_P.        (1.1)

Moreover, let

    g_P = (Ix1,P, Ix2,P)^T,   u_P = (u1,P, u2,P)^T,   y_P = -It,P,        (1.2)

and (1.1) becomes

    y_P = g_P^T u_P + ε_P = s_P + ε_P,   (s_P = g_P^T u_P).        (1.3)

The local constant LK estimator
• Assumptions
    N_P is a pixel neighbourhood of P.
    There are m = (2n + 1) × (2n + 1) pixels in N_P.
    A1: ε_1, ..., ε_m are independent.
    A2: ε = (ε_1, ..., ε_N)^T ~ N(0, σ² I).
• Definition of the estimator

    û_P = arg min_{u_P} Σ_{Q ∈ N_P} (y_Q - g_Q^T u_P)².        (1.4)

• Euler equations w.r.t. the estimator

    ∂/∂u_P = 0 ⟹ Σ_{Q ∈ N_P} (y_Q - g_Q^T û_P) g_Q = 0,
    or  (Σ_{Q ∈ N_P} g_Q g_Q^T) û_P = Σ_{Q ∈ N_P} y_Q g_Q.        (1.5A)

Therefore, we have

    û_P = M_P^{-1} Σ_{Q ∈ N_P} y_Q g_Q,   M_P = Σ_{Q ∈ N_P} g_Q g_Q^T.        (1.5B)

• Remark: N_P = {P1, P2, ..., Pm} ⊃ {P}, g_Q = (Ix1,Q, Ix2,Q)^T, and set

    A^T (2 × m) = [ Ix1,P1 ... Ix1,Pm ]  = [g_P1 ... g_Pm],
                  [ Ix2,P1 ... Ix2,Pm ]
    y = -(It,P1, ..., It,Pm)^T = (y_P1, ..., y_Pm)^T.

Then M_P = A^T A is a 2 × 2 matrix and Σ_{Q ∈ N_P} y_Q g_Q = A^T y, so
(1.5A) becomes A^T A û_P = A^T y, which is the normal equation of A û_P = y.
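The local least-squares solve (1.5B) is a two-line computation. The sketch below uses synthetic gradients and a synthetic constant flow (all values hypothetical) in place of real image derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic neighbourhood data (all values hypothetical, not from a real
# image): m = (2n+1)^2 pixels around P with n = 1, so m = 9.
m = 9
A = rng.normal(size=(m, 2))        # row Q holds g_Q^T = (Ix1,Q, Ix2,Q)
u_true = np.array([0.7, -0.3])     # constant flow over the neighbourhood
sigma = 0.05
y = A @ u_true + sigma * rng.normal(size=m)   # y_Q = g_Q^T u + eps_Q

# Normal equations (1.5A)/(1.5B):  M_P u_hat = A^T y  with  M_P = A^T A.
M_P = A.T @ A                      # 2 x 2 matrix
u_hat = np.linalg.solve(M_P, A.T @ y)
print(u_hat)                       # close to u_true for small sigma
```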

Stein's Unbiased Risk Estimator (SURE)
• SURE was originally used to estimate the mean of a multivariate normal distribution.
• We use it to estimate the tuning parameter in the context of image restoration.

Definition of SURE
• Minimise the expected value of the discrepancy (E denotes expectation):

    R = E( Σ_P |g_P^T (û_P - u_P)|² )
      = E( Σ_P |ŝ_P - s_P|² ),   (ŝ_P = g_P^T û_P),
      = E( Σ_P |(y_P - s_P) - (y_P - ŝ_P)|² ).        (2.1)

  The sum is over every pixel P in the image.

• Residual e and error ε:

    e = (y_P1 - ŝ_P1, ..., y_PN - ŝ_PN)^T,
    ε = (y_P1 - s_P1, ..., y_PN - s_PN)^T.        (2.2)

• The discrepancy measure (2.1) is

    R = E‖e - ε‖² = E‖e‖² - 2 E(ε^T e) + E‖ε‖².        (2.3)
An estimator of R
(1) An unbiased estimator of E‖e‖² is ‖e‖², which can be calculated directly once the flow estimate û is obtained.
(2) Since E(ε_P) = 0, E(ε_P²) = E(ε_P²) - [E(ε_P)]² ≡ σ², we have

    E(‖ε‖²) = Σ_P E(ε_P²) = N · E(ε_P²) = N σ².

(3) As usual, we must work on the middle term E(ε^T e).
Calculation of the middle term
Stein's Lemma: Given the Gaussian assumptions (A1 & A2), and û weakly differentiable with respect to the data y, we have

    E(ε^T e) = σ² E[trace(∂e/∂y)].        (2.4)

Proof: The distribution density is

    p(y|u) ≡ p(ε) ≡ p(ε_1, ..., ε_N)
           = Π_{i=1}^{N} p(ε_i)        (by independence)
           = Π_{i=1}^{N} (2πσ²)^{-1/2} exp(-ε_i² / (2σ²))        (2.5)
           = (2πσ²)^{-N/2} exp[-(1/(2σ²)) ‖ε‖²].
Furthermore, since ∂‖ε‖²/∂y = 2ε,

    ∂p/∂y = p(y|u) [-(1/(2σ²)) (2ε)] = -(p(y|u)/σ²) ε,        (2.6)
    ⟹ (∂p/∂y)^T e = -(p(y|u)/σ²) (ε^T e).

Thus,

    E(ε^T e) = ∫ [p(y|u) (ε^T e)] dy
             = -σ² ∫ [(∂p/∂y)^T e] dy
             = σ² ∫ [p(y|u) Σ_{i=1}^{N} ∂e_i/∂y_i] dy
                  (integration by parts & p(±∞) = 0)
             = σ² ∫ [p(y|u) trace(∂e/∂y)] dy
             = σ² E[trace(∂e/∂y)].
                  (by the definition of E)        (2.7)
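Stein's lemma (2.4) can be checked numerically. The sketch below uses the simplest smoother with a closed-form trace, a least-squares projection ŷ = S y (a stand-in for the LK estimator, not the method itself; all data synthetic), and compares a Monte Carlo estimate of E(ε^T e) with σ² trace(∂e/∂y):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear smoother y_hat = S y with S the least-squares "hat" matrix;
# here e = y - S y, so trace(de/dy) = trace(I - S) is known exactly.
N = 20
A = rng.normal(size=(N, 2))
S = A @ np.linalg.inv(A.T @ A) @ A.T        # projection, trace(S) = 2
s = A @ np.array([1.0, -2.0])               # noise-free signal s = A u
sigma = 0.3

trials = 100_000
eps = sigma * rng.normal(size=(trials, N))  # Gaussian errors (A2)
resid = (s + eps) @ (np.eye(N) - S).T       # e = y - y_hat, one row per trial
lhs = np.mean(np.sum(eps * resid, axis=1))  # Monte Carlo E(eps^T e)
rhs = sigma**2 * np.trace(np.eye(N) - S)    # sigma^2 * trace(de/dy)
print(lhs, rhs)                             # the two agree up to MC error
```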
The calculation of ∂e/∂y:

    ∂e_P/∂y_P = ∂/∂y_P (y_P - ŝ_P) = 1 - ∂ŝ_P/∂y_P
              = 1 - ∂/∂y_P (g_P^T û_P) = 1 - g_P^T ∂û_P/∂y_P        (2.8)
              = 1 - g_P^T ∂/∂y_P [M_P^{-1} Σ_{Q ∈ N_P} y_Q g_Q]
              = 1 - g_P^T M_P^{-1} g_P.        (from (1.5B))
From the above, an unbiased estimate of the risk is given by

    R̂ ≈ ‖e‖² - 2σ² trace(∂e/∂y) + N σ²
      = Σ_P [(y_P - ŝ_P)² - 2σ² (1 - g_P^T M_P^{-1} g_P) + σ²]
           (by (2.2), (2.8))
      = Σ_P [(y_P² - 2 y_P ŝ_P + ŝ_P²) + 2σ² g_P^T M_P^{-1} g_P - σ²].        (2.9)

We can drop y_P² and -σ², since neither term depends on the neighbourhood size. So we can replace R̂ by R̃ as follows:

    R̃ = Σ_P R̃_P,        (2.11)
    R̃_P = -2 y_P ŝ_P + ŝ_P² + 2σ² g_P^T M_P^{-1} g_P.
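Putting it together: a minimal 1-D sketch (synthetic gradients and a synthetic slowly varying flow, all values hypothetical) that evaluates R̃ from (2.11) for several window sizes and picks the minimiser:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic test problem: at each of Npix pixels we have a gradient g_P and
# an observation y_P = g_P^T u(P) + noise, with u(P) varying slowly.
Npix = 200
g = rng.normal(size=(Npix, 2))                        # g_P vectors
x = np.linspace(0.0, 1.0, Npix)
u_true = np.stack([1.0 + 0.5 * x, -0.5 * x], axis=1)  # slowly varying flow
sigma = 0.1
y = np.einsum('ij,ij->i', g, u_true) + sigma * rng.normal(size=Npix)

def sure_score(n):
    """Sum of R~_P from (2.11) over pixels, with a (2n+1)-pixel window."""
    total = 0.0
    for P in range(Npix):
        lo, hi = max(0, P - n), min(Npix, P + n + 1)
        A, yw = g[lo:hi], y[lo:hi]
        M = A.T @ A                              # M_P = sum of g_Q g_Q^T
        u_hat = np.linalg.solve(M, A.T @ yw)     # LK estimate (1.5B)
        s_hat = g[P] @ u_hat                     # s^_P = g_P^T u^_P
        Minv_g = np.linalg.solve(M, g[P])        # M_P^{-1} g_P
        total += -2*y[P]*s_hat + s_hat**2 + 2*sigma**2 * (g[P] @ Minv_g)
    return total

scores = {n: sure_score(n) for n in (1, 2, 4, 8, 16, 32)}
best = min(scores, key=scores.get)
print(scores, best)  # SURE trades off noise amplification (small n)
                     # against oversmoothing bias (large n)
```

Running this over the divergent-tree data files mentioned earlier would replace the synthetic arrays with real image gradients; the minimiser of the resulting SURE curve then gives the optimal n.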

CONCLUSION

The optical flow equation is ill-posed; an extra constraint must be imposed on the optical flow vector u.

The smoothing parameter for the local constant Lucas-Kanade method is the neighbourhood size n, and its choice is a tradeoff.

Stein’s Unbiased Risk Estimator can be used to
ﬁnd the optimal value of the tuning parameter n.

Questions?

Comments?

Constructive Criticism?

Suggestions?

Thank you


```