

Summary Last Week
• Minima and maxima
  – bracketing: 3 (x, y) pairs
  – golden section search: linear convergence, factor 1.618...; the analogue of bisection
  – parabolic interpolation: Brent's method
  – use of derivatives
     • when they are numerically reliable
  – Multiple dimensions:
     • downhill simplex method: a generalization of the golden section search
     • conjugate directions: Powell's method, N^2 line minimizations
     • use of derivatives: Fletcher-Reeves / Polak-Ribiere
  – Simulated annealing methods
     • traveling salesman problem: a few hundred cities




      H.J. Bulten, Computational Methods 2009                        1
[Figures: traveling-salesman routes from simulated annealing. Above: result with no penalty for river crossings; top right: with a penalty for river crossings; bottom: with a negative penalty.]
                      Fourier Analysis
• Many applications:
  – switch to conjugate coordinates
     • time - frequency (electronics)
     • quantum physics: coordinate - momentum space
  – signal processing
     • lock-in amplification
     • filtering
     • sound: MP3, CD at 44.1 kHz
     • image processing
     • data reduction
     • wavelets
     • gravitational waves: deformations of order 10^-27 can be detected by measuring for a year
  – convolution of functions
• all thanks to the fast Fourier transform (FFT)

Fourier transformation

h(t) \Leftrightarrow H(f):

  H(f) = \int_{-\infty}^{\infty} h(t)\, e^{2\pi i f t}\, dt, \qquad
  h(t) = \int_{-\infty}^{\infty} H(f)\, e^{-2\pi i f t}\, df

linear:  h(t) + g(t) \Leftrightarrow H(f) + G(f)

• Symmetries:
    h(t) real       ⇒  H(-f) = [H(f)]^*
    h(t) imaginary  ⇒  H(-f) = -[H(f)]^*
    h(t) even       ⇒  H(-f) = H(f)
    h(t) odd        ⇒  H(-f) = -H(f)
    h(t) real, even       ⇒  H(f) real, even
    h(t) real, odd        ⇒  H(f) imaginary, odd
    h(t) imaginary, even  ⇒  H(f) imaginary, even
    h(t) imaginary, odd   ⇒  H(f) real, odd

Transformation Laws

• scaling:      h(at) \Leftrightarrow \frac{1}{|a|} H\!\left(\frac{f}{a}\right)

• translation:  h(t - t_0) \Leftrightarrow H(f)\, e^{2\pi i f t_0}
                H(f - f_0) \Leftrightarrow h(t)\, e^{-2\pi i f_0 t}

• Convolution:  (g * h)(t) = \int_{-\infty}^{\infty} g(\tau)\, h(t - \tau)\, d\tau
                g * h \Leftrightarrow G(f) H(f)

• Correlation (lag):  Corr(g, h)(t) = \int_{-\infty}^{\infty} g(t + \tau)\, h(\tau)\, d\tau
                      Corr(g, h) \Leftrightarrow G(f) H^*(f)
Fourier transformation

• Power in functions:
  Total power:  \int_{-\infty}^{\infty} |h(t)|^2\, dt = \int_{-\infty}^{\infty} |H(f)|^2\, df

• power between f and f + df: typically one uses the one-sided power,
  P_h(f) = |H(f)|^2 + |H(-f)|^2,  0 \le f < \infty,  so that total power = \int_0^{\infty} P_h(f)\, df

• this gives a factor of two for real h(t):
  \int_{-\infty}^{\infty} |H(f)|^2\, df = \int_0^{\infty} P(f)\, df  ⇒  P(f) = 2 |H(f)|^2
Sampling, Nyquist frequency

• Often, h(t) is sampled at intervals Δ:
  h(t) \to h_n,  h_n = h(n\Delta),  f_c = \frac{1}{2\Delta}
• f_c is the so-called Nyquist frequency. If a function is bandwidth-limited to frequencies below the Nyquist frequency, then the sampled function h_n is an EXACT representation of the original function (infinite information compression):
  h(t) = \Delta \sum_{n=-\infty}^{\infty} h_n\, \frac{\sin\!\big(2\pi f_c (t - n\Delta)\big)}{\pi (t - n\Delta)}
• If h(t) is NOT bandwidth-limited, the higher frequencies are spuriously moved (aliased) to lower frequencies.
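A quick numerical sketch of this effect (not from the slides, using numpy): a 30 Hz sine sampled at 40 Hz, so f_c = 20 Hz, shows up at the aliased frequency |30 - 40| = 10 Hz.

```python
import numpy as np

# Sketch: a 30 Hz sine sampled at 40 Hz (f_c = 20 Hz) aliases to
# |30 - 40| = 10 Hz, below the Nyquist frequency.
fs = 40.0                                  # sampling rate 1/Delta
t = np.arange(256) / fs                    # sample times n*Delta
h = np.sin(2 * np.pi * 30.0 * t)

spectrum = np.abs(np.fft.rfft(h))
freqs = np.fft.rfftfreq(len(h), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(round(peak, 6))                      # 10.0, not 30.0
```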

Sampling, aliasing

[Figure illustrating aliasing of frequencies above the Nyquist frequency; not reproduced.]
Discrete Fourier transform

• Go from a discrete number of samples h_k to the discrete Fourier transform H_n:
  N samples h_k define N frequencies  f_n = \frac{n}{N\Delta},  n = -\frac{N}{2}, \ldots, \frac{N}{2}
  H(f_n) = \int_{-\infty}^{\infty} h(t)\, e^{2\pi i t f_n}\, dt \approx \Delta \sum_{k=0}^{N-1} h_k\, e^{2\pi i k n / N} \equiv \Delta H_n
• H_n is periodic with period N:  H_{-n} = H_{N-n}
• one usually replaces negative frequencies with frequencies shifted up by the Nyquist frequency (n runs from 0 to N instead of -N/2 to +N/2).
• inverse Fourier transform:
  h_k = \frac{1}{N} \sum_{n=0}^{N-1} H_n\, e^{-2\pi i k n / N}
            Fast Fourier Transform
• Normal (direct) Fourier transform: N numbers h_n are transformed into N numbers H_n, and each H_n needs N multiplications, so transforming the vector h into the vector H costs O(N^2) operations.
• This implies e.g. for a 1000x1000-pixel photo: 10^12 operations.
• FFT: O(N log N) operations (Cooley-Tukey, 1960s).
• already mentioned by Gauss, 1805.
• Danielson-Lanczos: a Fourier transform of N points can be written in terms of 2 Fourier transforms of N/2 points each: one over the even-numbered points, one over the odd-numbered points.

Fast Fourier Transform

• Split the Fourier transform into 2 halves:
  F_k = \sum_{j=0}^{N-1} e^{2\pi i j k / N} f_j
      = \sum_{j=0}^{N/2-1} e^{2\pi i k (2j) / N} f_{2j} + \sum_{j=0}^{N/2-1} e^{2\pi i k (2j+1) / N} f_{2j+1}
      = \sum_{j=0}^{N/2-1} e^{2\pi i k j / (N/2)} f_{2j} + e^{2\pi i k / N} \sum_{j=0}^{N/2-1} e^{2\pi i k j / (N/2)} f_{2j+1}
      = F_k^e + W^k F_k^o,  with  W = e^{2\pi i / N}
• do this recursively:
  F_k = F_k^e + W^k F_k^o = F_k^{ee} + W^{eo} F_k^{eo} + W^{oe} F_k^{oe} + W^{oo} F_k^{oo} = \ldots = \ldots F^{eeoeoeooe} + F^{eeoeoeooo} + \ldots
  (the W^{eo}, W^{oe}, ... collect the twiddle factors e^{2\pi i k/N}, e^{2\pi i k/(N/2)}, ... picked up at each level)
• Start with N = 2^k points (else zero-pad the remainder).
• A Fourier transform of length 1 is just a copy of the number: F^{eeoeoeooe\ldots} = f_n for some n.
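The Danielson-Lanczos recursion can be sketched directly (a minimal implementation, assuming N is a power of two; numpy's forward FFT uses the opposite sign convention, so the check compares against N times the inverse FFT):

```python
import numpy as np

# Sketch of the Danielson-Lanczos recursion, F_k = F_k^e + W^k F_k^o with
# W = e^{2 pi i / N}; N must be a power of two (zero-pad otherwise).
def fft_rec(f):
    N = len(f)
    if N == 1:
        return np.asarray(f, dtype=complex)   # length-1 transform: a copy
    Fe = fft_rec(f[0::2])                     # even-numbered samples
    Fo = fft_rec(f[1::2])                     # odd-numbered samples
    W = np.exp(2j * np.pi * np.arange(N // 2) / N)
    # F_{k} = Fe_k + W^k Fo_k and F_{k+N/2} = Fe_k - W^k Fo_k
    return np.concatenate([Fe + W * Fo, Fe - W * Fo])

x = np.arange(8, dtype=complex)
# numpy's fft uses e^{-2 pi i}; N * ifft matches the e^{+2 pi i} sign here
assert np.allclose(fft_rec(x), np.fft.ifft(x) * len(x))
```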

           Fast Fourier Transform
• Which sample number n corresponds to F^{eeoeoeoeoooeooee}?
  – each successive division of the data into an even and an odd part tests one bit of n, starting from the least significant bit: the k-th letter is o if the k-th bit of n is 1, and e if it is 0.
  – reverse the pattern of e's and o's
  – set e = 0, o = 1
  – then you have n in binary.
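A sketch of this recipe (the convention assumed here: the leftmost e/o letter corresponds to the first even/odd split, i.e. the least significant bit of n):

```python
# Sketch of the slide's recipe: reverse the e/o pattern, set e = 0, o = 1,
# and read the result as binary to obtain the sample index n.
def pattern_to_index(pattern):
    bits = ''.join('0' if c == 'e' else '1' for c in reversed(pattern))
    return int(bits, 2)

print(pattern_to_index('eoo'))   # 6: for N = 8, F^{eoo} is a copy of f_6
```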




FFT transform

• first step: reorder the array by bit reversal. [Figure not reproduced.]

• second step: combine 2 adjacent points:
  F_k^{xxx\ldots x} = F_k^{xxx\ldots xe} + W^k F_k^{xxx\ldots xo}

FFT

• repeat this step \log_2 N times, until the final FFT is obtained.
• Input and output definitions (a general feature of FFT libraries): [figure not reproduced]


                      FFT transforms
• Typically, complex-number arithmetic is not implemented as efficiently as real-number arithmetic, so algorithms are fed real arrays of 2N numbers instead of N complex numbers. If your function has a symmetry, the time needed by the Fourier transform can be reduced by filling the arrays differently (e.g. if the function is real, all imaginary parts are 0).
• Better: do 2 FFT's at once (useful e.g. for convolution and filtering). Pack one function into the real parts and the second into the imaginary parts. To separate the resulting transformed complex functions, use F_{N-n} = F_n^*, G_{N-n} = -G_n^* (F the transform of the real-packed part, G of the imaginary-packed part).
• Alternatively: split odd and even samples, put the odd samples in the imaginary parts, and do the last step of the FFT yourself. NR::real_ft does it for you.
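The pack-two-real-arrays trick can be sketched as follows (numpy sign convention; here G is taken as the transform of the real function g itself, so the separation uses F_{N-n} = F_n^* and G_{N-n} = G_n^*):

```python
import numpy as np

# Sketch of the do-2-FFTs-at-once trick: transform h = f + i*g once, then
# separate F and G using the real-input symmetry F_{N-n} = F_n^*.
def two_real_ffts(f, g):
    H = np.fft.fft(f + 1j * g)              # one complex FFT
    Hc = np.conj(np.roll(H[::-1], 1))       # conj(H_{N-n})
    return (H + Hc) / 2, (H - Hc) / 2j

rng = np.random.default_rng(1)
f, g = rng.standard_normal(16), rng.standard_normal(16)
F, G = two_real_ffts(f, g)
assert np.allclose(F, np.fft.fft(f))        # matches two separate FFTs
assert np.allclose(G, np.fft.fft(g))
```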
multiple dimensions

• The numerical FFT extends naturally from one to more dimensions, e.g. h(k_1, k_2) \Leftrightarrow H(n_1, n_2):
  h(k_1, k_2)  for  k_1 = 0, \ldots, N_1 - 1;  k_2 = 0, \ldots, N_2 - 1
  H(n_1, n_2) = \sum_{k_2=0}^{N_2-1} \sum_{k_1=0}^{N_1-1} e^{2\pi i k_2 n_2 / N_2}\, e^{2\pi i k_1 n_1 / N_1}\, h(k_1, k_2)
  H(n_1, n_2) = FFT_2\{FFT_1[h(k_1, k_2)]\} = FFT_1\{FFT_2[h(k_1, k_2)]\}
• lots of bookkeeping due to the nested Fourier transforms; use a multi-dimensional FFT algorithm (NR::fourn).
• inverse transform: just add the factor 1/(N_1 N_2 \cdots N_x) after the transformation, and specify with a switch that the complex factors are e^{-i\ldots} instead of e^{+i\ldots}.
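A quick numerical check of this factorization (a sketch using numpy; np.fft.fft2 stands in for a multi-dimensional routine such as NR::fourn):

```python
import numpy as np

# Sketch: the 2-D transform factorizes into 1-D FFTs over rows followed
# by 1-D FFTs over columns, in either order.
rng = np.random.default_rng(2)
h = rng.standard_normal((4, 8))

rows_then_cols = np.fft.fft(np.fft.fft(h, axis=1), axis=0)
cols_then_rows = np.fft.fft(np.fft.fft(h, axis=0), axis=1)
assert np.allclose(rows_then_cols, np.fft.fft2(h))   # both orders agree
assert np.allclose(cols_then_rows, np.fft.fft2(h))
```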
                     two dimensions
• image processing typically uses FFT techniques. The image pixels are represented as 2-dimensional functions. One could for instance "brighten" the whole picture by changing the value of H(0,0) and transforming back. One can sharpen the picture by deconvolution, or run it through a filter that removes high- or low-frequency components.




                         Convolution
• r * s = s * r
• often, r and s play different roles: s is the signal (a continuous data stream), r the response (e.g. smearing, a low-pass filter, ...).




                          Convolution
• Typically, the signal s(t_0) is not transferred exactly as a delta function r = \delta(t - t_0), but smeared by a response function r(t - t_0).
• Discrete case: s and r are sampled in steps Δ. [Figure not reproduced.]
Convolution

• Hence, r_0 copies the input into the output, r_1 tells you which fraction of the input is smeared into the next bin, r_{-1} = r_{N-1} gives how much is put into the previous bin, etc.
• continuous convolution:
  (r * s)(t) = \int_{-\infty}^{\infty} r(\tau)\, s(t - \tau)\, d\tau
• if r is non-zero only in a finite range M, it is called a finite impulse response (FIR):
  (r * s)_j = \sum_{k=-M/2+1}^{M/2} s_{j-k}\, r_k
• discrete convolution theorem: if the signal is periodic with period N and the FIR has length M \le N, then
  \sum_{k=-N/2+1}^{N/2} s_{j-k}\, r_k \Leftrightarrow S_n R_n
                          Convolution
• usually, the FIR has a much shorter length M; treat this by inserting zeros.
• The index runs from 0 to N-1, just as before: s(-N/2) = s(N/2), s(-1) = s(N-1).
• The signal may not be periodic. Then the result at s(0) is influenced by the measurement at s(N-1). Here, zero padding helps too: insert at least M zeros after the last non-zero measurement (M is the duration of the FIR); then the results at s(0) are no longer influenced. Discard the results for the last M values of s, the padded zeros (they are determined by the measurements at s(0)...s(M-1)).
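A sketch of zero-padded FFT convolution along these lines (numpy; np.convolve provides the reference linear convolution):

```python
import numpy as np

# Sketch of FFT convolution with zero padding: pad both arrays to a power
# of two >= len(s) + len(r) - 1, so wrap-around cannot spoil the result.
def fft_convolve(s, r):
    L = len(s) + len(r) - 1
    N = 1 << (L - 1).bit_length()            # next power of two >= L
    return np.fft.irfft(np.fft.rfft(s, N) * np.fft.rfft(r, N), N)[:L]

s = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([0.5, 0.5])                     # 2-point smoothing response
assert np.allclose(fft_convolve(s, r), np.convolve(s, r))
```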



                          Convolution




• left: discrete convolution of non-periodic
  signal s. The regions at the end are spoiled.
• right: zero-padded result.

                           Convolution
• procedure:
  – the data, with zero padding, consist of pairs s_j, r_j, j = 0, ..., N-1
  – the discrete convolution is obtained by taking the Fourier transforms of r and s, multiplying them term by term, and taking the inverse Fourier transform. The result is (r * s).
• deconvolution: take the transforms of the measured data and of the response function. Divide the transform of the measured data by the transform of the response function, and you obtain the deconvolved signal!
• if R_n = 0, the knowledge of the frequency component S_n is completely lost, so reconstruction is not possible.
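A sketch of the deconvolution procedure on synthetic data (the response here is chosen so that R_n is nonzero everywhere, since any zero would make the reconstruction impossible):

```python
import numpy as np

# Sketch of deconvolution: divide the transform of the measured data by
# the transform of the response. The response below is chosen (assumed)
# so that R_n != 0 for all n.
rng = np.random.default_rng(3)
N = 64
u = rng.standard_normal(N)                            # "true" signal
r = np.zeros(N); r[0], r[1], r[-1] = 0.6, 0.25, 0.15  # periodic response
s = np.fft.ifft(np.fft.fft(u) * np.fft.fft(r)).real   # measured s = r (*) u

u_rec = np.fft.ifft(np.fft.fft(s) / np.fft.fft(r)).real
assert np.allclose(u_rec, u)                          # deconvolved signal
```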

Correlation

• in correlation, the data sets are typically similar in shape. E.g. you search for a predefined signal, such as the gravitational-wave signal of a binary collapse, in a data stream (matched filtering).
• Corr(g, h)(t) = Corr(h, g)(-t)
• discrete:
  Corr(g, h)_j = \sum_{k=0}^{N-1} g_{j+k}\, h_k = FFT^{-1}(G_k H_k^*)

• zero padding is analogous to the
  convolution method.
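The discrete correlation theorem can be checked directly (a sketch in numpy's convention; note the inverse FFT of G_n H_n^*):

```python
import numpy as np

# Sketch: Corr(g,h)_j = sum_k g_{j+k} h_k equals the inverse FFT of
# G_n H_n^* (circular correlation; zero-pad non-periodic data).
def fft_correlate(g, h):
    return np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(h))).real

g = np.array([1.0, 2.0, 0.0, -1.0])
h = np.array([0.0, 1.0, 0.5, 0.0])
direct = np.array([sum(g[(j + k) % 4] * h[k] for k in range(4))
                   for j in range(4)])                 # the defining sum
assert np.allclose(fft_correlate(g, h), direct)
```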
Optimal filtering

• Noise removal: the signal u is modified by a response r and a noise contribution.
  – the true signal u, convolved with the response r, gives the signal s; adding noise n gives the measurement c: c = (u * r) + n
  s(t) = \int_{-\infty}^{\infty} r(\tau)\, u(t - \tau)\, d\tau  ⇒  S(f) = R(f) U(f)
  c(t) = s(t) + n(t)
  \tilde{U}(f) = \frac{C(f)\, \Phi(f)}{R(f)}
• Φ(f) is the optimal filter when Ũ is as close as possible to the true function U:
  \int_{-\infty}^{\infty} |\tilde{U}(f) - U(f)|^2\, df  is minimized

Optimal filtering

• signal and noise are uncorrelated, so their cross-product integrates to zero:
  \int |\tilde{U}(f) - U(f)|^2\, df = \int \left| \frac{[S(f) + N(f)]\, \Phi(f)}{R(f)} - \frac{S(f)}{R(f)} \right|^2 df
    = \int |R(f)|^{-2} \left\{ |S(f)|^2\, |1 - \Phi(f)|^2 + |N(f)|^2\, |\Phi(f)|^2 \right\} df
• The integrand is minimized when the derivative with respect to Φ(f) is zero. This leads to the optimal filter Φ:
  \Phi(f) = \frac{|S(f)|^2}{|S(f)|^2 + |N(f)|^2}

• Noise needs to be estimated; this is usually
  possible.
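A minimal Wiener-filter sketch, assuming |S(f)|^2 and |N(f)|^2 have already been estimated (in this synthetic demo the "estimates" are taken as exactly known, and R(f) = 1):

```python
import numpy as np

# Sketch of the optimal filter Phi = |S|^2 / (|S|^2 + |N|^2), applied as
# u_hat = IFFT(Phi * C / R); S2 and N2 are assumed-known power estimates.
def wiener_filter(c, R, S2, N2):
    Phi = S2 / (S2 + N2)                         # the optimal filter
    return np.fft.ifft(Phi * np.fft.fft(c) / R).real

rng = np.random.default_rng(4)
N = 128
t = np.arange(N)
u = 5.0 * np.sin(2 * np.pi * 5 * t / N)          # true signal u
c = u + rng.standard_normal(N)                   # measurement, R(f) = 1
S2 = np.abs(np.fft.fft(u)) ** 2                  # assumed-known |S|^2
N2 = np.full(N, float(N))                        # white noise: sigma^2 * N
u_hat = wiener_filter(c, np.ones(N), S2, N2)
# the filtered estimate is closer to u than the raw measurement
assert np.mean((u_hat - u) ** 2) < np.mean((c - u) ** 2)
```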
Optimal filtering

• Noise: either measured in the absence of a signal, or extrapolated under the signal region.

[Figure: power spectrum with the noise level n extrapolated under the signal region; not reproduced.]
                            Wavelets
• Fourier transform: local information in the time domain is spread over all frequencies, and vice versa.
• wavelet transformations: also a fast, linear operation, but wavelets are more localized.
• w(t) \Leftrightarrow W(k); invertible and orthogonal: the transpose of the matrix is its inverse.
• renders matrices sparse: information reduction.
• Daubechies wavelets range from highly localized to highly smooth.
• The simplest class has 4 coefficients, c_0 ... c_3.
• The matrix is given by:



DAUB4

\begin{pmatrix}
c_0 &  c_1 & c_2 &  c_3 &        &     &      &     &      \\
c_3 & -c_2 & c_1 & -c_0 &        &     &      &     &      \\
    &      & c_0 &  c_1 & c_2    & c_3 &      &     &      \\
    &      & c_3 & -c_2 & c_1    & -c_0 &     &     &      \\
    &      &     &      & \ddots &     &      &     &      \\
    &      &     &      &        & c_0 &  c_1 & c_2 &  c_3 \\
    &      &     &      &        & c_3 & -c_2 & c_1 & -c_0 \\
c_2 &  c_3 &     &      &        &     &      & c_0 &  c_1 \\
c_1 & -c_0 &     &      &        &     &      & c_3 & -c_2
\end{pmatrix}
• odd rows: smoothing filter (a weighted running average over 4 points)
• even rows: quadrature mirror filter; enhances local differences.
• the matrix should be orthogonal: its transpose should be its inverse.
DAUB4

• This holds when:
  \sum_{i=0}^{3} c_i^2 = 1, \qquad c_2 c_0 + c_3 c_1 = 0
• Requiring that the quadrature mirror filter has zero response to a smooth function gives additional constraints:
  c_3 - c_2 + c_1 - c_0 = 0
  c_2 - 2 c_1 + 3 c_0 = 0
the first requirement is for a vanishing constant term, the second for a vanishing linear increase

                Wavelet transform
• for DAUB4, this gives the coefficients:

  c_0 = \frac{1 + \sqrt{3}}{4\sqrt{2}}, \quad
  c_1 = \frac{3 + \sqrt{3}}{4\sqrt{2}}, \quad
  c_2 = \frac{3 - \sqrt{3}}{4\sqrt{2}}, \quad
  c_3 = \frac{1 - \sqrt{3}}{4\sqrt{2}}

• transformation: keep applying the matrix
  transformation, first to the vector length
  N, then to the smooth components N/2,
  then to the smooth-smooth components
  N/4, etc. (pyramidal)
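One level of this pyramid can be sketched as follows (a minimal numpy version; here the smooth coefficients are gathered in the first half of the output and the detail coefficients in the second half, a permutation of the interleaved matrix rows):

```python
import numpy as np

# Sketch of one DAUB4 level: each pair of inputs yields one smooth
# coefficient (row c0 c1 c2 c3) and one detail coefficient (row
# c3 -c2 c1 -c0), with wrap-around at the end of the array.
SQ3 = np.sqrt(3.0)
c = np.array([1 + SQ3, 3 + SQ3, 3 - SQ3, 1 - SQ3]) / (4 * np.sqrt(2.0))

def daub4_step(a):
    n = len(a)
    out = np.empty(n)
    for i in range(n // 2):
        w = a[np.arange(2 * i, 2 * i + 4) % n]   # 4 wrapped samples
        out[i]          = c[0]*w[0] + c[1]*w[1] + c[2]*w[2] + c[3]*w[3]
        out[n // 2 + i] = c[3]*w[0] - c[2]*w[1] + c[1]*w[2] - c[0]*w[3]
    return out

a = np.arange(8.0)
b = daub4_step(a)
assert np.isclose(np.dot(b, b), np.dot(a, a))    # orthogonal: norm preserved
assert np.isclose(b[4], 0.0)                     # detail filter kills a ramp
```

Applying daub4_step repeatedly to the first N, N/2, N/4, ... entries implements the pyramidal scheme described above.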



Wavelet transform

[Figure: the pyramidal transform scheme; not reproduced.]

• the coefficients D are called wavelet coefficients, the coefficients S are called mother-function (smooth) coefficients

                 wavelet transform
• What do these wavelets look like?
• perform DAUB4 and DAUB20 on unit vectors:

[Figure: the resulting DAUB4 and DAUB20 wavelets; not reproduced.]
                            wavelets
• cusps: non-smooth derivatives
• uses of wavelets:
  – 1) to get the strength of localized events (e.g. a signal in a measurement stream)
  – 2) to render matrices sparse.




                            wavelets
• wavelet transformations give good results
  in data compression for images. Also, one
  may use wavelet transformations to render
  matrices sparse, so they are more easily
  inverted (see book).





								