UNIVERSITY OF CAMBRIDGE
CENTRE FOR MATHEMATICAL SCIENCES
DEPARTMENT OF APPLIED MATHEMATICS & THEORETICAL PHYSICS




Highly oscillatory quadrature
    and its applications

          Arieh Iserles


OUTLINE


1. Motivation
   • The modified Magnus method
   • Exponential integrators
   • Computational electromagnetics
2. Highly oscillatory quadrature in one variable
   • Asymptotic expansion
   • The Filon method
   • The Levin method
   • The Huybrechs–Vandewalle method
3. The multivariate case
4. Modified Fourier expansion
   • Modified Fourier series
   • Univariate Birkhoff series
   • Modified Fourier in a cube
   • Polynomial subtraction and the hyperbolic cross
   • Toward general multivariate theory
5. Application to spectral problems
Motivation
The modified Magnus method
From Airy to modified Magnus

Our point of departure: the Airy equation
                  y ′′ + ty = 0,   t ≥ 0,    y(0) = 1,   y ′(0) = 0.
Its solution,

[Figure: the solution y(t) for 0 ≤ t ≤ 100]

oscillates increasingly rapidly: using WKB analysis, it is easy to confirm that

        y(t) ∼ t^{−1/4} sin(2t^{3/2}/3).
Airy numerics

How well will different initial-value solvers do in solving this exceedingly
simple ODE? For simplicity, we look at constant-step implementations;
variable-step, error-controlled packages display similar behaviour.

The global error of the classical 4th-order Runge–Kutta method is

[Figure: global error for h = 1/10, 1/20, 1/40 on increasingly long time intervals]

We can clearly observe that the global error evolves according to a power law.
This can be confirmed using WKB analysis and general formulae for the evolution
of global error, based on the Alekseev–Gröbner Lemma [AI]. Specifically,

        y_n − y(t_n) ∼ O(t^{13/4}).
Gauss–Legendre RK
The 4th-order classical RK has poor stability properties. So, what about
4th-order Gauss–Legendre Runge–Kutta: it is A-stable, symplectic. . . .

[Figure: global error for h = 1/10, 1/20, 1/40]

This, however, doesn't help much. The O(t^{13/4}) rate of increase in global
error is true now as well!

As a matter of fact, it is possible to prove that the O(t^{13/4}) rate of
global-error accumulation is universal to all 4th-order B-series methods [AI].
This analysis can be extended to harmonic oscillators of the form y'' = a(t)y,
and even to nonlinear oscillators like y'' + ty^3 = 0 [Niesen].
The Magnus method
What to do? Herewith the global error for the 4th-order Magnus integrator:

[Figure: global error for h = 1/10 and h = 1/40, absolute (top row) and
scaled by t^{−1/4} (bottom row)]

The error increases only as O(t^{1/4}) for a long time. Indeed, all this can be
proved (again, also for more general harmonic oscillators): Magnus accumulates
error much more slowly!
To remind you of the Magnus method. . . Given the linear ODE system

        y' = A(t)y,   t ≥ 0,   y(0) = y_0,

we represent its solution in the form y(t_{n+1}) = e^{Ω(h)} y(t_n), where t_n = nh.
Then

        Ω' = Σ_{k=0}^∞ (B_k/k!) ad_Ω^k A(t_n + t),   t ≥ 0,   Ω(0) = O,

where ad_Ω^0 B = B, ad_Ω^{k+1} B = [Ω, ad_Ω^k B] and B_k is the kth Bernoulli
number [Hausdorff]. The function Ω has the Magnus expansion

        Ω(t) = ∫_0^t A(t_n + x) dx
             − (1/2) ∫_0^t ∫_0^x [A(t_n + y), A(t_n + x)] dy dx
             + (1/4) ∫_0^t ∫_0^x ∫_0^y [[A(t_n + z), A(t_n + y)], A(t_n + x)] dz dy dx
             + (1/12) ∫_0^t ∫_0^x ∫_0^x [A(t_n + z), [A(t_n + y), A(t_n + x)]] dz dy dx + · · · .
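As a concrete illustration (a sketch, not taken from the talk): truncating the expansion after the first commutator and discretising the integrals with the two Gauss–Legendre points yields the classical 4th-order Magnus method, Ω ≈ (h/2)(A_1 + A_2) + (√3 h²/12)[A_2, A_1]. The names `expm`, `magnus4_step` and `A_airy` below are illustrative; in practice one would use `scipy.linalg.expm`.

```python
import numpy as np

def expm(M, terms=24):
    """Matrix exponential of a small matrix: scaled-and-squared Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))) + 1)
    X, E, T = M / 2**s, np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def magnus4_step(A, t, y, h):
    """One step of the 4th-order Magnus method: Omega truncated after the
    first commutator, A sampled at the two Gauss-Legendre points."""
    c = np.sqrt(3) / 6
    A1, A2 = A(t + (0.5 - c) * h), A(t + (0.5 + c) * h)
    Omega = 0.5 * h * (A1 + A2) + (np.sqrt(3) / 12) * h**2 * (A2 @ A1 - A1 @ A2)
    return expm(Omega) @ y

def A_airy(t):
    """Airy equation y'' + t y = 0 written as the system y' = A(t) y."""
    return np.array([[0.0, 1.0], [-t, 0.0]])
```

Applied to the Airy system, halving h reduces the global error by roughly a factor of 16, as expected of a 4th-order method.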
General analysis of Magnus integrators, their order, implementation and com-
putation of multivariate integrals has been developed by AI & Nørsett. Magnus
is an example of a Lie-group integrator, but this is of little importance to the
present argument.

Magnus integrators are effective for highly oscillatory problems and the rea-
son is important. Why are hi-osc problems difficult? Because their derivatives
increase rapidly: their amplitude is scaled by frequency. And the error of stan-
dard methods scales like a high-order derivative of the solution! However, the
Magnus expansion consists of multivariate integrals of ’nice’ functions – and
the amplitude of the integrals doesn’t increase! Hence we can expect the rate
of convergence of the expansion to be unaffected by oscillation.

But we can do much better! Suppose that we can make the integrands os-
cillate. Then the size of the integrals will be divided by frequency! Hence we
can expect the rate of convergence of the expansion to be accelerated by os-
cillation: An example of a situation when, once we understand high oscillation,
we can turn it from an enemy into a friend.
Modified Magnus

We again solve y ′ = A(t)y . The main idea is to approximate the problem
locally by a linear ODE with constant coefficients and correct.

Thus, we approximate A(t) in [t_n, t_{n+1}] by Ã = A(t_n + h/2). The exact
solution of x' = Ãx, x(t_n) = y(t_n) being x(t) = e^{(t−t_n)Ã} y(t_n), we set

        y_{n+1} = e^{hÃ} Θ(h) y_n,

where

        Θ' = B(t)Θ,   Θ(0) = I,   with   B(t) = e^{−tÃ} [A(t_n + t) − Ã] e^{tÃ}.

Suppose that all the eigenvalues of Ã are pure imaginary. Then the entries of
e^{±tÃ} oscillate rapidly, hence so do the entries of B. We solve the new
matrix linear system with Magnus.
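A minimal sketch of one modified Magnus step (an illustration under stated assumptions, not the talk's implementation): freeze A at the midpoint, then approximate Θ(h) by exponentiating a two-point Gauss–Legendre approximation of ∫_0^h B(t) dt. Plain Gauss–Legendre here is exactly the naive quadrature whose shortcomings at genuinely high frequencies are discussed below; for moderate frequencies it behaves like a 4th-order method. `expm`, `modified_magnus_step` and `A_airy` are illustrative names.

```python
import numpy as np

def expm(M, terms=24):
    """Matrix exponential of a small matrix: scaled-and-squared Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))) + 1)
    X, E, T = M / 2**s, np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def modified_magnus_step(A, t, y, h):
    """One modified Magnus step y_{n+1} = e^{h A~} Theta(h) y_n, with Theta(h)
    approximated by exponentiating the first Magnus term for B, and the
    integral of B discretised at the two Gauss-Legendre points."""
    At = A(t + h / 2)                                   # A~ = A(t_n + h/2)
    def B(x):                                           # B(x) = e^{-xA~}(A(t_n+x) - A~)e^{xA~}
        return expm(-x * At) @ (A(t + x) - At) @ expm(x * At)
    c = np.sqrt(3) / 6
    integral = 0.5 * h * (B(h * (0.5 - c)) + B(h * (0.5 + c)))
    return expm(h * At) @ expm(integral) @ y

def A_airy(t):
    """Airy equation y'' + t y = 0 written as the system y' = A(t) y."""
    return np.array([[0.0, 1.0], [-t, 0.0]])
```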
The global error is now

[Figure: global error of modified Magnus for the Airy equation on
0 ≤ t ≤ 10000, remaining at the 10^{−9} level throughout]

Note that the error decays like t^{−1/4}, i.e. at the same speed as the exact
solution!
The fact that global error with modified Magnus matches the rate of decay of
the exact solution for harmonic oscillators is universal [AI].

Similar ideas, based on different modifications of Magnus, have been devel-
oped by Degani & Schiff and by Hochbruck & Lubich.

But. . . all this is true only as long as the integrals are evaluated exactly – as
is possible for Airy. Once we attempt the same Gauss–Legendre-based quadrature
as used for other Magnus integrators, the performance is considerably worse. To
be able to implement methods like modified Magnus and similar expansions in
iterated integrals (e.g. the Neumann expansion) we must be able to handle highly
oscillatory (multivariate) integrals effectively.




Exponential integrators

[There are many exponential integrators – indeed, many exponential integrators for highly
oscillatory ODEs [Grimm & Hochbruck]. This is not a review of exponential integrators, just a
discussion of one approach, based on the dissertation (in progress) of M. Khanamirian.]


Stage I: We commence from the linear ODE

        y' = Ay + f(t),   t ≥ 0,   y(0) = y_0,

where σ(A) ⊂ iR, ‖A‖ ≫ 1. Thus, the solution of the ODE oscillates rapidly.
The exact solution is, of course,

        y(t) = e^{tA} y_0 + ∫_0^t e^{(t−x)A} f(x) dx,

and this motivates the initial-value integrator

        y_{n+1} = e^{hA} y_n + ∫_0^h e^{(h−x)A} f(t_n + x) dx.

Note that the integral above oscillates rapidly!!!
Stage II: Consider the system

        y' = A(t)y + f(t),   t ≥ 0,   y(0) = y_0.

Then

        y(t) = Φ_0(t) y_0 + ∫_0^t Φ_0(t − x) f(x) dx,

where

        Φ_a' = A(t) Φ_a,   t ≥ a,   Φ_a(a) = I.

Obviously, we can replace Φ by a truncated Magnus expansion [Hochbruck,
Thalhammer].

The outcome is a time-stepping method with arbitrarily high order of asymp-
totic decay which, at the same time, has good classical order.

All this can be done even better with modified Magnus.


But note again: all this works only as long as we can deal with highly oscillatory
integrals!
Stage III: Finally, we consider

        y' = A(t)y + f(t, y),   t ≥ 0,   y(0) = y_0,

where f is 'small'. The variation of constants formula

        y(t) = Φ_0(t) y_0 + ∫_0^t Φ_0(t − x) f(x, y(x)) dx,

results in

        y_{n+1} = Φ_{t_n}(h) y_n + ∫_0^h Φ_{t_n}(h − x) f(t_n + x, y(t_n + x)) dx,

but this, on the face of it, is useless, since the unknown y features inside the
integral. Not so!

We propose to use waveform relaxation,

        y^[0](t_n + x) ≡ y_n,
        y^[s+1](t_n + x) = Φ_{t_n}(x) y_n + ∫_0^x Φ_{t_n}(x − ξ) f(t_n + ξ, y^[s](t_n + ξ)) dξ,

hence need to compute highly oscillatory integrals in each iteration.
Computational electromagnetics

The scattering of a sound wave off an object is typically modelled by solving
the Helmholtz equation

        ∆u + k²u = 0,   x ∈ Ω,

subject to the boundary conditions

        u(x) = f(x),  x ∈ ∂Ω_1,   and   ∂u(x)/∂n = 0,  x ∈ ∂Ω_2,

where ∂Ω = ∂Ω_1 ∪ ∂Ω_2. This can be reduced to solving an integral equation
of the form

        (i/4) ∫_{∂Ω} H_0^{(1)}(k‖x − y‖) q(y) ds_y = u(x)

in R² or, in R³,

        (1/4π) ∫_{∂Ω} (e^{ik‖x−y‖}/‖x − y‖) q(y) ds_y = u(x).

Here H_0^{(1)} is the Hankel function.
It is known that for an incoming wave u^i(x) = u_s^i(x) e^{ikg^i(x)} the
solution has the form

        q(x) = q_s(x) e^{ikg^i(x)},

where u_s^i(x), g^i(x) and q_s(x) are asymptotically nonoscillatory.

We compute q_s(x) by collocation. Set

        q_s(t) = Σ_{k=1}^r c_k ψ_k(t),

where ψ_1, ψ_2, . . . , ψ_r are linearly independent basis functions. We
collocate at points x_k = κ(t_k), where κ : [0, 1] → ∂Ω is a parametrisation
of the boundary. Thus, for example, in R², we obtain a linear system in which
each component is a highly oscillatory integral of the form

        (i/4) ∫_0^1 H_1^{(1)}(k‖κ(t_n) − κ(t)‖) e^{ik[g^i(κ(t)) − g^i(κ(t_n))]} ‖∇κ(t)‖ ψ_k(t) dt.
Highly oscillatory integration in one variable
We commence from the computation of
        I[f] = ∫_a^b f(x) e^{iωg(x)} dx,   ω ≫ 1,

where f and the oscillator g are given. The standard integration with interpo-
latory quadrature (e.g. Gauss–Legendre) is useless: for ω ≫ 1 the error is
O(1), virtually indistinguishable from using a random-number generator.
The reason is that e^{iωg(x)} oscillates so rapidly round the unit circle that
its samples at Gaussian points are, to all intents and purposes, random numbers.
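This failure is easy to demonstrate. A small sketch (illustrative, not from the talk) compares n-point Gauss–Legendre with the exact value ∫_0^1 e^{iωx} dx = (e^{iω} − 1)/(iω); `gauss_legendre` is an assumed helper name.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre approximation of int_a^b f(x) dx."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    mid, rad = 0.5 * (b + a), 0.5 * (b - a)
    return rad * np.sum(w * f(mid + rad * x))

for omega in [10, 100, 1000]:
    exact = (np.exp(1j * omega) - 1) / (1j * omega)   # I[1] with g(x) = x
    approx = gauss_legendre(lambda x: np.exp(1j * omega * x), 0.0, 1.0, 10)
    print(omega, abs(exact), abs(approx - exact))
```

For ω = 10 the rule is still accurate, but once ω greatly exceeds the number of nodes the error settles at O(1), even though the integral itself is O(ω^{−1}).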

Asymptotic expansion
Let us assume first that g' ≠ 0 in [a, b]. Integrating by parts,

        I[f] = (1/iω) ∫_a^b (f(x)/g'(x)) (d/dx) e^{iωg(x)} dx
             = (1/iω) [e^{iωg(b)} f(b)/g'(b) − e^{iωg(a)} f(a)/g'(a)] − (1/iω) I[d(f/g')/dx].
We iterate the expression and this results in the asymptotic expansion

        I[f] ∼ Σ_{m=0}^∞ (1/(−iω)^{m+1}) [ (e^{iωg(a)}/g'(a)) σ_m(a) − (e^{iωg(b)}/g'(b)) σ_m(b) ],   ω ≫ 1,

where

        σ_0(x) = f(x),   σ_{m+1}(x) = (d/dx) [σ_m(x)/g'(x)],   m ∈ Z_+.

Note that

        σ_0 = f,
        σ_1 = −(g''/g'^2) f + (1/g') f',
        σ_2 = ((3g''^2 − g'g''')/g'^4) f − (3g''/g'^3) f' + (1/g'^2) f''

and so on: each σ_m is a linear combination of f, f', . . . , f^(m).

All this leads to our first method, the asymptotic quadrature

        Q^A_s[f] = Σ_{m=0}^{s−1} (1/(−iω)^{m+1}) [ (e^{iωg(a)}/g'(a)) σ_m(a) − (e^{iωg(b)}/g'(b)) σ_m(b) ].

Asymptotic quadrature requires just the values of f and its first s − 1
derivatives at the endpoints and it has asymptotic order s: the absolute error
decays like O(ω^{−s−1}).
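For the simplest oscillator g(x) = x one has σ_m = f^(m), so Q^A_s uses nothing but endpoint derivatives. A short sketch (illustrative names; f = cosh on [0, 1], for which I[f] has a closed form):

```python
import numpy as np

def asymptotic_quadrature(derivs_a, derivs_b, a, b, omega):
    """Q_s^A for g(x) = x, where sigma_m = f^(m):
    derivs_a[m] = f^(m)(a), derivs_b[m] = f^(m)(b), m = 0, ..., s-1."""
    Q = 0j
    for m, (fa, fb) in enumerate(zip(derivs_a, derivs_b)):
        Q += (np.exp(1j * omega * a) * fa - np.exp(1j * omega * b) * fb) \
             / (-1j * omega) ** (m + 1)
    return Q

def cosh_derivs(x, s):
    """[f(x), f'(x), ...] for f = cosh: derivatives alternate cosh, sinh."""
    return [np.cosh(x) if m % 2 == 0 else np.sinh(x) for m in range(s)]

def exact_cosh(omega):
    """Closed form of int_0^1 cosh(x) e^{i omega x} dx."""
    return ((np.exp(1 + 1j * omega) - 1) / (2 * (1 + 1j * omega))
            + (np.exp(-1 + 1j * omega) - 1) / (2 * (-1 + 1j * omega)))
```

With s = 1 the error decays like O(ω^{−2}): doubling ω divides it by roughly four.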
Stationary points
Suppose first that g'(ξ) = 0, g''(ξ) ≠ 0 for some ξ ∈ (a, b) and that g' ≠ 0
elsewhere.

Naive integration by parts breaks down, since division by g' introduces a polar
singularity. An alternative is the method of stationary phase [Cauchy, Stokes,
Kelvin], except that, while requiring nasty contour integration, it falls short of
delivering all the information we need. Instead, let

        µ_0(ω) = ∫_a^b e^{iωg(x)} dx

and, representing I[f] = I[f(ξ)] + I[f( · ) − f(ξ)],

        I[f] = f(ξ)µ_0(ω) + (1/iω) ∫_a^b ((f(x) − f(ξ))/g'(x)) (d/dx) e^{iωg(x)} dx
             = f(ξ)µ_0(ω) + (1/iω) [e^{iωg(b)} (f(b) − f(ξ))/g'(b) − e^{iωg(a)} (f(a) − f(ξ))/g'(a)]
               − (1/iω) ∫_a^b (d/dx)[(f(x) − f(ξ))/g'(x)] e^{iωg(x)} dx.
We continue by induction. Letting

        σ_0(x) = f(x),   σ_{m+1}(x) = (d/dx) [(σ_m(x) − σ_m(ξ))/g'(x)],   m ∈ N,

we have

        I[f] ∼ µ_0(ω) Σ_{m=0}^∞ σ_m(ξ)/(−iω)^m
             + Σ_{m=0}^∞ (1/(−iω)^{m+1}) [ e^{iωg(a)} (σ_m(a) − σ_m(ξ))/g'(a) − e^{iωg(b)} (σ_m(b) − σ_m(ξ))/g'(b) ].

The van der Corput lemma ⇒ µ_0(ω) = O(ω^{−1/2}).
Therefore, using the first s derivatives at a, ξ and b gives an asymptotic method
with an asymptotic error of O(ω^{−s−3/2}).

Easy generalisation to g'(ξ) = · · · = g^(q)(ξ) = 0, g^(q+1)(ξ) ≠ 0, q ≥ 1, as
well as to several stationary points.
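The ω^{−1/2} decay of µ_0 is easy to check numerically. A quick sketch (illustrative, not from the talk) for g(x) = x(1 − x) on [0, 1], which has a single stationary point ξ = 1/2 with g''(ξ) = −2, so stationary phase predicts |µ_0(ω)| ≈ √(π/ω):

```python
import numpy as np

def mu0(omega, n=400_000):
    """mu_0(omega) = int_0^1 e^{i omega x(1-x)} dx by the midpoint rule,
    on a grid fine enough to resolve the oscillation."""
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(1j * omega * x * (1 - x)))

for omega in [100.0, 400.0, 1600.0]:
    print(omega, abs(mu0(omega)) * np.sqrt(omega))   # roughly constant, near sqrt(pi)
```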
Filon-type quadrature

A superior alternative to asymptotic quadrature is Filon-type quadrature [AI &
Nørsett].
[The grand-daddy of the idea is due to Louis Napoleon George Filon, further
developed by Flinn – and then misunderstood for tens of years. . . .]

Let

        a = c_1 < c_2 < · · · < c_ν = b

be given nodes and m_1, . . . , m_ν ∈ N their multiplicities. We choose ϕ as
the Hermite interpolating polynomial s.t.

        ϕ^(j)(c_k) = f^(j)(c_k),   j = 0, . . . , m_k − 1,   k = 1, . . . , ν,

and set

        Q^F_s[f] = I[ϕ].

[The underlying assumption is that the moments ∫_a^b x^k e^{iωg(x)} dx can be
computed explicitly: this is an important restriction!]
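A minimal Filon-type sketch for g(x) = x (illustrative names, not the talk's code): interpolate f by values at the endpoints only (ν = 2, m_1 = m_2 = 1), using the exact moments M_k = ∫_0^1 x^k e^{iωx} dx, which obey M_0 = (e^{iω} − 1)/(iω) and M_k = e^{iω}/(iω) − (k/(iω)) M_{k−1}:

```python
import numpy as np

def moments(omega, K):
    """Exact moments M_k = int_0^1 x^k e^{i omega x} dx, k = 0..K,
    via the integration-by-parts recurrence."""
    M = [(np.exp(1j * omega) - 1) / (1j * omega)]
    for k in range(1, K + 1):
        M.append(np.exp(1j * omega) / (1j * omega) - k / (1j * omega) * M[-1])
    return M

def filon_linear(f, omega):
    """Q_1^F on [0, 1] with c = [0, 1], multiplicities [1, 1]:
    I[p] for the linear interpolant p(x) = f(0) + (f(1) - f(0)) x."""
    M = moments(omega, 1)
    return f(0.0) * M[0] + (f(1.0) - f(0.0)) * M[1]
```

Unlike Q^A_1, this stays accurate for small ω as well; its asymptotic error is O(ω^{−2}), and interior nodes would shrink the error constant further.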
THEOREM If m_1 = m_ν = s then

        Q^F_s[f] − I[f] ∼ O(ω^{−s−1}),   ω ≫ 1.

Proof  Since Q^F_s[f] − I[f] = I[ϕ − f] and

        ϕ^(j)(a) = f^(j)(a),   ϕ^(j)(b) = f^(j)(b),   j = 0, 1, . . . , s − 1,

the proof follows from the asymptotic expansion of I[ϕ − f].   □

Filon-type methods have a number of advantages over asymptotic quadrature.
Firstly, they are effective also for small ω.
Secondly, the addition of extra nodes in (a, b), while not changing the
asymptotic order, typically greatly increases precision.
Thirdly, they scale up much more easily to the multivariate setting.
Filon with stationary points

The Filon method scales up easily to any number of stationary points of any
degree. The principle of this generalisation is important, because it also
applies to the multivariate setting.

The idea is that, to attain asymptotic order O(ω^{−α}), the interpolation
polynomial must at least share the same derivative information as the
asymptotic expansion of the same asymptotic order.

Thus, suppose stationary points ξ_1, . . . , ξ_r ∈ (a, b) of degrees
m_1, . . . , m_r resp. We choose nodes a = c_1 < c_2 < · · · < c_ν = b, where
ν ≥ r + 2 and

        ξ_1, ξ_2, . . . , ξ_r ∈ {c_2, c_3, . . . , c_{ν−1}}.

To attain asymptotic order O(ω^{−s−1}) we need to interpolate to multiplicity s
at the endpoints and to multiplicity m_k s at ξ_k, k = 1, . . . , r.
Without stationary points: ∫_0^1 cosh x e^{iωx} dx, plotting ω^{s+1} × |error|
for 40 ≤ ω ≤ 200:

[Figure, s = 1: Q^A_1;  Q^F_1 with c = [0, 1], m = [1, 1];
 Q^F_1 with c = [0, 1/4, 1/2, 3/4, 1], m = [1, 1, 1, 1, 1]]

[Figure, s = 2: Q^A_2;  Q^F_2 with c = [0, 1], m = [2, 2];
 Q^F_2 with c = [0, 1/4, 1/2, 3/4, 1], m = [2, 1, 1, 1, 2]]
With stationary points:

        I[f] = ∫_0^1 cosh x e^{iωx(1−x)} dx:   r = 1, ξ_1 = 1/2,

plotting ω^{3/2} × |error| for 40 ≤ ω ≤ 200:

[Figure: Q^A_1;  Q^F_1 with c = [0, 1/2, 1], m = [1, 1, 1];
 Q^F_1 with c = [0, 1/4, 1/2, 3/4, 1], m = [1, 1, 1, 1, 1]]

Note that Q^A_1 and the first Q^F_1 use the same information: function values
at 0, 1/2 and 1.
Levin quadrature

An alternative to Filon-type methods is Levin-type quadrature [Levin, S. Olver].

Suppose that we can solve the ODE
                            y ′ + iωg ′(x)y = f (x).
Then
    d/dx [y(x)e^{iωg(x)}] = f (x)e^{iωg(x)}    ⇒    I[f ] = y(b)e^{iωg(b)} − y(a)e^{iωg(a)}.
Although in general the solution of the ODE is unknown, we can approximate
it by collocation.

Consider again nodes a < c1 < c2 < · · · < cν = b, multiplicities m1, . . . , mν
and choose basis functions {ψ0, . . . , ψr }, where r = Σ_l ml − 1. We seek
α0, . . . , αr such that
                    y(x) ≈ ỹ(x) = Σ_{k=0}^r αk ψk (x).

                                                                           26
Specifically, we impose the collocation conditions
    d^j/dx^j [ỹ′(x) + iωg′(x)ỹ(x) − f (x)] |_{x=ci} = 0,    j = 0, . . . , mi − 1,  i = 1, . . . , ν,
whence
    Q^L_s[f ] = ỹ(b)e^{iωg(b)} − ỹ(a)e^{iωg(a)} = I[f ] + O(ω^{−s−1}),
provided that the basis is a Chebyshev set.

Advantage: We don’t need moments. Instead, we need to solve a small linear
system.
Disadvantage: The error is typically larger than with Filon-type methods –
but this can be sorted out with a clever choice of the basis: essentially, use
ψk = σk [S. Olver]. More serious is the fact that Levin-type methods don’t
work in the presence of stationary points.
Advantage: Sheehan Olver’s work demonstrates that it is possible to imple-
ment generalisations of Levin methods with stationary points and to use them
to solve highly oscillatory ODEs!
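To make the construction concrete, here is a minimal Python sketch of Levin-type collocation with simple nodes and a monomial basis (the basis and node choices are illustrative assumptions, not the clever σk basis mentioned above), applied to an oscillator g(x) = x that has no stationary points:

```python
import numpy as np

def levin_quadrature(f, g, dg, a, b, omega, nodes):
    """Approximate I[f] = int_a^b f(x) e^{i w g(x)} dx by collocating
    y' + i*w*g'(x)*y = f(x) with y(x) ~ sum_k alpha_k x^k at the nodes."""
    c = np.asarray(nodes, dtype=float)
    nu = len(c)
    A = np.zeros((nu, nu), dtype=complex)
    for k in range(nu):
        dpsi = k * c**(k - 1) if k > 0 else np.zeros_like(c)  # (x^k)'
        A[:, k] = dpsi + 1j * omega * dg(c) * c**k            # ODE residual at c
    alpha = np.linalg.solve(A, f(c))
    y = lambda x: sum(alpha[k] * x**k for k in range(nu))
    return y(b) * np.exp(1j * omega * g(b)) - y(a) * np.exp(1j * omega * g(a))

# int_0^1 cosh(x) e^{i*50*x} dx: no moments needed, just a 3x3 linear solve
Q = levin_quadrature(np.cosh, lambda x: x, np.ones_like, 0.0, 1.0, 50.0,
                     [0.0, 0.5, 1.0])
```

With the endpoints among the nodes and simple multiplicities, the error behaves like O(ω^{−2}) as ω grows.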
                                                                                 27
The Huybrechs–Vandewalle quadrature
(The method of numerical steepest descent.)

Let us consider for simplicity
                                        b
                              I[f ] =       f (x)eiωxdx.
                                        a
Assume that f is analytic in an open domain inclusive of
                         {z ∈ C : Re z ∈ [a, b], Im z ≥ 0}
and consider the integral along the closed rectangular contour with vertices
a, b, b + i∞ and a + i∞: I1 along the real interval [a, b], I2 up the right
edge from b, I3 along the top and I4 down the left edge back to a.
                                                                         28
By the Cauchy theorem
                                I1 + I2 + I3 + I4 = 0.
Moreover, I1 = I[f ] and I3 = 0. Hence
    I[f ] = −I2 − I4 = −ie^{iωb} ∫_0^∞ f (b + it)e^{−ωt} dt + ie^{iωa} ∫_0^∞ f (a + it)e^{−ωt} dt.
We compute the integrals on the right with classical Gauss–Laguerre quadra-
ture: using ν points we obtain asymptotic order ν [Huybrechs & Vandewalle].

All this can be scaled up. Firstly, for a general oscillator g without stationary
points we need to choose a more elaborate complex path. Secondly, in the
presence of stationary points, the path must also cross them at an appropriate
angle and we need to use generalized Gauss–Laguerre quadrature. Finally,
everything generalizes to a multivariate setting.

[As things stand, we have three powerful techniques: Filon-type, Levin-type and Huybrechs–
Vandewalle. None is universally better: in different situations different approaches are supe-
rior and much further work remains to be done. Hot off the press: Recently Sheehan Olver
proposed a Filon-like technique that doesn’t require moments.]
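For the linear oscillator g(x) = x the method is only a few lines. The sketch below (an illustration with assumed names, not the authors' code) maps each half-line integral onto Gauss–Laguerre weights via the substitution t = τ/ω:

```python
import numpy as np

def nsd_quadrature(f, a, b, omega, nu):
    """Numerical steepest descent for int_a^b f(x) e^{i w x} dx, assuming f
    is analytic in the upper half-strip above [a, b]."""
    tau, w = np.polynomial.laguerre.laggauss(nu)   # nodes/weights for e^{-tau}
    # int_0^inf f(c+it) e^{-w t} dt = (1/w) int_0^inf f(c + i tau/w) e^{-tau} dtau
    tail = lambda c: np.dot(w, f(c + 1j * tau / omega)) / omega
    return 1j * (np.exp(1j * omega * a) * tail(a)
                 - np.exp(1j * omega * b) * tail(b))

# nu = 6 points per endpoint already give very high accuracy at omega = 50
Q = nsd_quadrature(np.cosh, 0.0, 1.0, 50.0, 6)
```

The larger ω is, the smaller the effective domain each Laguerre rule sees, so accuracy *improves* with the frequency.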
                                                                                        29
Multivariate quadrature
Let Ω ⊂ Rd be a domain with smooth boundary and let f, g : Rd → R be
smooth functions. We are concerned with the computation of

                    I[f ; Ω] = ∫_Ω f (x)e^{iωg(x)} dV.
We need to distinguish three types of critical points, which determine the
asymptotics of I:


 1. Stationary points ξ ∈ cl Ω s.t. ∇g(ξ ) = 0;

 2. Resonance points ξ ∈ ∂Ω s.t. ∇g(ξ ) is normal to the boundary;

 3. Vertices ξ ∈ ∂Ω where a parametric representation of the boundary is
    non-smooth.

[In one variable vertices correspond to the endpoints and there are no resonance points: the
latter correspond to lower-dimensional stationary points “hiding” on the boundary.]
                                                                                      30
The path to happiness leads through an asymptotic expansion – and in the
univariate world the asymptotic expansion required integration by parts. In the
present setting we assume w.l.o.g. that Ω is simply connected and use the
Stokes Theorem. [We henceforth concentrate on the case when there are no stationary points:
theory for stationary points exists but is much more complicated.] After much algebra, it
results in
    I[f ; Ω] = (1/(iω)) ∫_{∂Ω} n⊤(x)∇g(x) [f (x)/‖∇g(x)‖²] e^{iωg(x)} dS
              − (1/(iω)) ∫_Ω ∇⊤[ f (x)∇g(x)/‖∇g(x)‖² ] e^{iωg(x)} dV,
where n(x) is the outward unit normal at x ∈ ∂Ω. Iterating the above yields
    I[f ; Ω] ∼ − Σ_{m=0}^∞ (−iω)^{−m−1} ∫_{∂Ω} n⊤(x)∇g(x) [σm(x)/‖∇g(x)‖²] e^{iωg(x)} dS,
where
    σ0 = f ,      σm = ∇⊤[ σ_{m−1} ∇g/‖∇g‖² ],      m ∈ N.

                                                                                   31
Thus, we expand I[f, Ω] in lower-dimensional integrals, a procedure that can
be repeated d times, whence
                    I[f ; Ω] ∼ Σ_{n=0}^∞ Θn[f ]/(−iω)^{n+d},

where each Θn is a linear functional which depends on ∂^{|m|}f /∂x^m, |m| ≤ n,
at the vertices of Ω. [Note that we are implicitly assuming that there are no resonance
points in any dimensions, otherwise they must be added to the calculation and the plot thick-
ens.]


In principle, we can try to work out explicit values of Θn, but this is difficult and
these values are quite messy. But this is not necessary!

The important information is not the exact form of Θn but the nature of deriva-
tives required to form it! Once it is known, we can easily deduce (like in the
univariate case) the order of Filon-type methods [AI & Nørsett]. Levin-type
methods are slightly more complicated, but the same principle ultimately pre-
vails [S. Olver].
                                                                                       32
To form a multivariate Filon-type method, we interpolate f and its derivatives
at the vertices to the requisite order, and perhaps also at other points in Ω, by a
multivariate polynomial: the technique is familiar from finite elements, except
that now we don’t need a ‘small’ partition: the larger (and fewer) the elements,
the better!

Levin-type methods can be generalized using a very similar approach.

An unwelcome feature of multivariate quadrature is the presence of resonance
points. Indeed, if ∂Ω is smooth then there must be resonance points; this
makes the calculation of moments exceedingly difficult and also creates problems
in the asymptotic expansion once we descend through the dimensions. [Our tech-
nique of proof is not to blame – the asymptotic behaviour is genuinely different!]


There are several ad hoc ideas to deal with resonance points. The Huybrechs–
Vandewalle method can also be used in this setting: as things stand, this is
the approach which, in the presence of resonance points, requires the least
tweaking.
                                                                                 33
Modified Fourier expansion
Modified Fourier series
Let f be an analytic and periodic function in [−1, 1] and set
    f̂^C_n = ∫_{−1}^1 f (x) cos πnx dx,       f̂^D_n = ∫_{−1}^1 f (x) sin πnx dx.
The Fourier expansion:
    f ∼ (1/2)f̂^C_0 + Σ_{n=1}^∞ (f̂^C_n cos πnx + f̂^D_n sin πnx).


  • Pointwise convergence of the expansion in [−1, 1] at exponential speed:
    |f̂^C_n|, |f̂^D_n| ≤ ce^{−αn} for some α, c > 0 and all n ∈ Z+.

  • Approximating Fourier integrals by the Discrete Fourier Transform, we
    commit an exponentially small error.

  • The first m DFT coefficients can be calculated by Fast Fourier Transform
    in O(m log2 m) operations.
                                                                          34
THE BAD NEWS
Suppose that f is no longer periodic. Then

 • Pointwise convergence occurs in (−1, 1), while the Gibbs phenomenon
   and breakdown in convergence occur at the endpoints.

  • Fourier coefficients decay very slowly: |f̂^C_n|, |f̂^D_n| ∼ O(n^{−1}).


  • An m-point DFT commits an O(m^{−2}) error.


AN ALTERNATIVE: MODIFIED FOURIER EXPANSION
We replace sin πnx with sin π(n − 1/2)x and consider the expansion
    f ∼ (1/2)f̂^C_0 + Σ_{n=1}^∞ [f̂^C_n cos πnx + f̂^S_n sin π(n − 1/2)x],
where
    f̂^C_n = ∫_{−1}^1 f (x) cos πnx dx,       f̂^S_n = ∫_{−1}^1 f (x) sin π(n − 1/2)x dx.
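The faster decay is easy to observe numerically. The sketch below (an illustration of the claim, not part of the slides) compares a classical sine coefficient with its modified counterpart for the non-periodic f (x) = e^x:

```python
import numpy as np
from scipy.integrate import quad

f, n = np.exp, 20   # non-periodic sample function, moderate frequency
# classical coefficient: int f(x) sin(pi n x) dx ~ O(n^{-1})
fD, _ = quad(lambda x: f(x) * np.sin(np.pi * n * x), -1, 1, limit=200)
# modified coefficient: int f(x) sin(pi (n - 1/2) x) dx ~ O(n^{-2})
fS, _ = quad(lambda x: f(x) * np.sin(np.pi * (n - 0.5) * x), -1, 1, limit=200)
```

At n = 20 the classical coefficient is already roughly 45 times larger in magnitude than the modified one.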

                                                                             35
Why is this interesting?
  • The set {1} ∪ H1, where
        H1 = {cos πnx : n ≥ 1} ∪ {sin π(n − 1/2)x : n ≥ 1},
    is orthonormal and dense in L²[−1, 1].

  • Modified Fourier coefficients decay faster:
        f̂^C_n, f̂^S_n ∼ O(n^{−2}),      n ≫ 1.
  • Suitable versions of the classical Fejér and de la Vallée Poussin theorems
    hold in this setting and it is possible to prove O(m^{−2}) convergence in
    (−1, 1) and O(m^{−1}) convergence at the boundary.

  • Instead of discretizing with DFT (or DCT and DST), we approximate inte-
    grals by highly oscillatory quadrature techniques. This allows us to eval-
    uate the first m coefficients to very high precision in O(m) operations.

  • Low-frequency coefficients can be computed by “exotic quadrature”, using
    the same data as highly oscillatory quadrature.

                                                                         36
COMPLETENESS AND CONVERGENCE

THEOREM (Krein) The set H1 is an orthonormal basis of L2[−1, 1].

A Fejér-type convergence theorem: Let
    Sm[f ](x) = (1/2)f̂^C_0 + Σ_{n=1}^m [f̂^C_n cos πnx + f̂^S_n sin π(n − 1/2)x],
    σm[f ](x) = (1/m) Σ_{n=1}^m Sn[f ](x).
THEOREM (AI & Nørsett) Let f be Riemann integrable in [−1, 1]. At every
x ∈ [−1, 1] where f is Lipschitz it is true that
                    lim_{m→∞} σm[f ](x) = f (x).




                                                                       37
A de la Vallée Poussin-type theorem:

THEOREM (AI & Nørsett) Suppose that f is Riemann integrable and that
    f̂^C_n, f̂^S_n = O(n^{−1}),      n ≫ 1.
If f is Lipschitz at x ∈ (−1, 1) then Sm[f ](x) → f (x). If, in addition,
f ∈ C[α, β], where −1 < α < β < 1, then the convergence is uniform in [α, β].

Asymptotic convergence theorem:

THEOREM (S. Olver) Let f ∈ C²[−1, 1] be such that f ′′ ∈ BV[−1, 1]. Then the
m-term modified Fourier approximation converges like O(m^{−2}) in (−1, 1)
and like O(m^{−1}) at the endpoints.

Proof by using asymptotic expansions and Lerch functions.


                                                                       38
ASYMPTOTICS

Integrating by parts,
    f̂^C_n = ((−1)^n/(n²π²)) [f ′(1) − f ′(−1)] − (1/(n²π²)) ∫_{−1}^1 f ′′(x) cos πnx dx.
Iterating this expression, we obtain the asymptotic expansion
    f̂^C_n ∼ (−1)^n Σ_{k=0}^∞ ((−1)^k/(nπ)^{2k+2}) [f^{(2k+1)}(1) − f^{(2k+1)}(−1)] = O(n^{−2})
for n ≫ 1. Likewise,
    f̂^D_n ∼ (−1)^{n−1} Σ_{k=0}^∞ ((−1)^k/(nπ)^{2k+1}) [f^{(2k)}(1) − f^{(2k)}(−1)] ∼ O(n^{−1}),
    f̂^S_n ∼ (−1)^{n−1} Σ_{k=0}^∞ ((−1)^k/[(n − 1/2)π]^{2k+2}) [f^{(2k+1)}(1) + f^{(2k+1)}(−1)] ∼ O(n^{−2}).
This explains why
    f̂^D_n ∼ O(n^{−1}),        f̂^C_n, f̂^S_n ∼ O(n^{−2}).
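The leading term of the expansion is already very accurate on its own. A quick numerical check (illustrative, for the f (x) = e^x example used on the following slides):

```python
import numpy as np
from scipy.integrate import quad

f = np.exp          # so f'(1) = e and f'(-1) = 1/e
n = 20
fC, _ = quad(lambda x: f(x) * np.cos(np.pi * n * x), -1, 1, limit=200)
# leading term of the expansion: (-1)^n [f'(1) - f'(-1)] / (n pi)^2
lead = (-1)**n * (np.e - 1 / np.e) / (n * np.pi)**2
# the remainder is O(n^{-4}), here of the order 1e-7
```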

                                                                         39
                                Absolute errors

[Figure: pointwise absolute errors on [−1, 1] for f (x) = e^x − cosh 1.
Classical Fourier: errors up to ≈ 0.04 (m = 100) and ≈ 0.02 (m = 200).
Modified Fourier: errors up to ≈ 0.0003 (m = 100) and ≈ 0.00008 (m = 200).]
                                                                                                    40
Computing modified Fourier coefficients
An obvious point of departure: truncation of asymptotic expansions. This
results in the asymptotic method
    Â^C_{s,n}[f ] = (−1)^n Σ_{k=0}^{s−1} ((−1)^k/(nπ)^{2k+2}) [f^{(2k+1)}(1) − f^{(2k+1)}(−1)],
    Â^S_{s,n}[f ] = (−1)^{n−1} Σ_{k=0}^{s−1} ((−1)^k/[(n − 1/2)π]^{2k+2}) [f^{(2k+1)}(1) + f^{(2k+1)}(−1)].
Therefore

  • Once we precompute f^{(2k+1)}(±1) for k = 0, . . . , s − 1, computing the
    coefficients Â^C_{s,n}[f ] and Â^S_{s,n}[f ] for n ≤ m takes O(m) flops.

  • It is true that
        Â^C_{s,n}[f ] ∼ f̂^C_n + O(n^{−2s−2}),      Â^S_{s,n}[f ] ∼ f̂^S_n + O(n^{−2s−2}).

  • All this doesn’t apply to small n, before the onset of asymptotic behaviour.
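A sketch of the asymptotic method in Python (for f (x) = e^x every odd derivative at ±1 is e^{±1}, so the precomputed data is a single pair; the function names are assumptions of this sketch):

```python
import numpy as np

def A_cos(s, n, d1, dm1):
    """Truncated asymptotic expansion for the cosine coefficient;
    d1[k], dm1[k] hold f^{(2k+1)}(1) and f^{(2k+1)}(-1), k = 0..s-1."""
    return (-1)**n * sum((-1)**k * (d1[k] - dm1[k]) / (n * np.pi)**(2 * k + 2)
                         for k in range(s))

d1, dm1 = [np.e] * 3, [1 / np.e] * 3      # odd derivatives of e^x at +1, -1
approx = [A_cos(s, 20, d1, dm1) for s in (1, 2, 3)]
```

For f (x) = e^x the coefficient is known in closed form, f̂^C_n = (e − 1/e)/(1 + (nπ)²) for even n, so the O(n^{−2s−2}) improvement as s grows can be verified directly.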

                                                                           41
[Figure: three panels, n = 2, . . . , 20.]

Scaled errors n^{2s+2}|Â^C_{s,n}[f ] − f̂^C_n| (circles) and n^{2s+2}|Â^S_{s,n}[f ] − f̂^S_n| (crosses)
for s = 1, 2, 3 and f (x) = e^x.


                 Cosine terms                               Sine terms
   n       s = 1        s = 2        s = 3         s = 1        s = 2        s = 3
   1    −2.19e−02     2.22e−03    −2.25e−04     3.61e−01    −1.46e−01     5.93e−02
   2    −1.47e−03    −3.73e−05     9.44e−07    −5.99e−03     2.70e−04    −1.21e−05
   3    −2.95e−04     3.31e−06    −3.73e−08    −7.98e−04    −1.29e−05     2.10e−07
  10     2.41e−06    −2.44e−09     2.47e−12    −3.89e−06     4.36e−09    −4.90e−12

Absolute errors Â^C_{s,n}[f ] − f̂^C_n (on the left) and Â^S_{s,n}[f ] − f̂^S_n for f (x) = e^x.
                                                                                                                                                         42
It is easy to generalise to Filon methods, except that in our specific case we
know that the asymptotic formula requires only odd derivatives. Therefore, we
need to interpolate only odd derivatives!

Leaving the choice of c2, . . . , cν−1 arbitrary (for the time being), we let ϕ be a
polynomial s.t.
    ϕ^{(2i)}(ck ) = f^{(2i+1)}(ck ),      k = 1, . . . , ν,  i = 0, . . . , mk − 1,
and set
    p(x) = f (0) + ∫_0^x ϕ(ξ)dξ.
Then p^{(2i+1)} = f^{(2i+1)} at the nodes and we obtain the Filon-type method
    F̂^C_{s,n}[f ] = ∫_{−1}^1 p(x) cos πnx dx,       F̂^S_{s,n}[f ] = ∫_{−1}^1 p(x) sin π(n − 1/2)x dx,
where s = min{m1, mν }. Moreover, integrating by parts,
    F̂^C_{s,n}[f ] = −(1/(nπ)) ∫_{−1}^1 ϕ(x) sin πnx dx,
    F̂^S_{s,n}[f ] = (1/((n − 1/2)π)) ∫_{−1}^1 ϕ(x) cos π(n − 1/2)x dx.
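A minimal sketch of this construction for the simplest data m = [1, 1, 1, 1] (the inner nodes ±1/2 are an arbitrary choice of this sketch, and the coefficient integrals are done by ordinary quadrature purely for verification):

```python
import numpy as np
from scipy.integrate import quad

f, df, n = np.exp, np.exp, 10
nodes = np.array([-1.0, -0.5, 0.5, 1.0])
# phi: cubic interpolating f' at the nodes, so p' matches f' there
coef = np.polyfit(nodes, df(nodes), 3)
P = np.polyint(coef)                                        # antiderivative of phi
p = lambda x: np.polyval(P, x) - np.polyval(P, 0.0) + f(0)  # enforce p(0) = f(0)
FC, _ = quad(lambda x: p(x) * np.cos(np.pi * n * x), -1, 1, limit=200)
fC, _ = quad(lambda x: f(x) * np.cos(np.pi * n * x), -1, 1, limit=200)
# FC reproduces the true coefficient with an O(n^{-4}) error
```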

                                                                                      43
  • Once we precompute f^{(2i+1)}(ck ) for i = 0, . . . , mk − 1, k = 1, . . . , ν,
    computing the coefficients F̂^C_{s,n}[f ] and F̂^S_{s,n}[f ] for n ≤ m takes
    O(m) flops.

  • It is true that
        F̂^C_{s,n}[f ] ∼ f̂^C_n + O(n^{−2s−2}),      F̂^S_{s,n}[f ] ∼ f̂^S_n + O(n^{−2s−2}).


EXOTIC QUADRATURE
In order to deal with small n, where the asymptotic expansion makes no sense,
the idea is to utilise derivative values that have already been computed in the
implementation of the Filon method, together with a single additional compu-
tation of f (0), to approximate coefficients corresponding to (very) low values
of n.

The situation is reminiscent of Gauss–Turán quadrature but this exotic quadra-
ture is different from GT. Specifically,
    ∫_{−1}^1 h(x)dx ≈ 2h(0) + Σ_{k=1}^ν Σ_{i=0}^{mk−1} b_{k,i} h^{(2i+1)}(ck ).

                                                                            44
Suppose that mk ≡ 1: only this case is understood comprehensively. We
assume that ν = 2µ is even.

THEOREM (AI & Nørsett) Let ζ1, . . . , ζµ−1 be the zeros of the (µ − 1)-degree
orthogonal polynomial w.r.t. the measure (1 − x)x^{1/2} dx, x ∈ [−1, 1]. We set
ζµ = 1 and let
    cµ+1−k = −√ζk ,      cµ+k = √ζk ,      k = 1, . . . , µ,
and choose b1, . . . , bν to maximise the order p of the quadrature
    ∫_{−1}^1 h(x)dx ≈ 2h(0) + Σ_{k=1}^ν bk h′(ck ).
Then p = 4ν − 2 and this is the highest order attainable in exotic quadrature.

The more general case, in particular when we allow higher odd derivatives at endpoints, is
open. We can solve it by brute force for small ν, but not in general.


                                                                                       45
EXAMPLES

CASE 1  s = 1, c = [−1, −√11/7, √11/7, 1], m = [1, 1, 1, 1]:
    F̂^C_{1,n}[f ] = Â^C_{1,n}[f ] − (147/209)((−1)^n/(nπ)^4) {11[f ′(1) − f ′(−1)] − 7√11[f ′(√11/7) − f ′(−√11/7)]},
    F̂^S_{1,n}[f ] = Â^S_{1,n}[f ] − (49/19)((−1)^{n−1}/[(n − 1/2)π]^4) [f ′(1) − f ′(√11/7) − f ′(−√11/7) + f ′(−1)].

Asymptotic decay of the error is O(n^{−4}). Exotic quadrature,
    2h(0) + (37/2280)[h′(1) − h′(−1)] + (2401√11/25080)[h′(√11/7) − h′(−√11/7)],
is of order 8.
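The order-8 claim can be checked directly: the rule above should integrate every monomial up to degree 7 exactly over [−1, 1] and fail at degree 8. A quick verification sketch:

```python
import math

c = math.sqrt(11) / 7                      # inner nodes +-sqrt(11)/7
b1, b2 = 37 / 2280, 2401 * math.sqrt(11) / 25080

def Q(h, dh):
    """Case-1 exotic quadrature for int_{-1}^1 h(x) dx."""
    return 2 * h(0) + b1 * (dh(1) - dh(-1)) + b2 * (dh(c) - dh(-c))

def err(j):
    """Error of the rule on the monomial x^j."""
    exact = 2 / (j + 1) if j % 2 == 0 else 0.0
    return abs(Q(lambda x: x**j,
                 lambda x: j * x**(j - 1) if j else 0.0) - exact)

# exact through degree 7, but not for degree 8
```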

CASE 2  s = 2, c = [−1, −α, α, 1], m = [2, 1, 1, 2], where
    α = (1/1860) √(2937870 − 930√5879841).
Asymptotic order is O(n^{−6}), while exotic quadrature (using f (0), first deriva-
tives at ±1, ±α and third derivatives at ±1) is of order 10.
                                                                               46
CASE 3  s = 2, c = [−1, −β, β, 1], m = [2, 2, 2, 2], where β ∈ (0, 1) is a
zero of
    51150x^8 − 136939x^6 + 88847x^4 − 18373x^2 + 1331,
namely β ≈ 0.741581771093504943408000. This ensures that the order
of exotic quadrature is 12. The asymptotic order of error decay is O(n^{−6}).
Explicit formulæ (both for Filon and for exotic quadrature) can be derived easily
using straightforward algebra.

To recap, in the present case we precompute the data
    f (0), f ′(±1), f ′′′(±1), f ′(±β), f ′′′(±β),
subsequently evaluating the first m modified Fourier coefficients (with Filon,
except that we use exotic quadrature for F̂^C_{s,0}[f ]) in O(m) flops.



                                                                            47
[Figure: three panels, n = 2, . . . , 20.]

Scaled errors n^{2s+2}|F̂^C_{s,n}[f ] − f̂^C_n| and n^{2s+2}|F̂^S_{s,n}[f ] − f̂^S_n| for cases 1–3
and f (x) = e^x.

                           Cosine terms
   n     m = [1, 1, 1, 1]    m = [2, 1, 1, 2]    m = [2, 2, 2, 2]
   0         1.65e−06           −1.98e−08            3.90e−10
   1        −8.87e−05            2.38e−06           −4.08e−09
   2         1.07e−04           −2.62e−06            9.84e−10
   3         2.52e−05            2.76e−07            3.62e−09
  10         2.28e−07           −2.26e−10           −5.19e−12

                            Sine terms
   n     m = [1, 1, 1, 1]    m = [2, 1, 1, 2]    m = [2, 2, 2, 2]
   1        −2.49e−03            1.13e−04           −1.35e−06
   2         1.50e−03            6.90e−05           −3.21e−07
   3         2.17e−04           −3.58e−06           −8.26e−08
  10        −1.10e−06            1.25e−09            4.71e−11

Absolute errors F̂^C_{s,n}[f ] − f̂^C_n (upper table) and F̂^S_{s,n}[f ] − f̂^S_n (lower) for f (x) = e^x.
                                                                                                                                                           48
Univariate Birkhoff series

What causes the coefficients of modified Fourier to decay faster than those of
classical Fourier?
    sin πnx,    cos π(n − 1/2)x,    cos πnx,    sin π(n − 1/2)x
are eigenfunctions of the harmonic oscillator
    u′′ + α²u = 0,      x ∈ [−1, 1]
– the first two with Dirichlet b.c. and the remaining with Neumann b.c. The
secret is in the boundary conditions! For suppose that u′(±1) = 0. Then,
integrating twice by parts,
    ∫_{−1}^1 f (x)u(x)dx = −(1/α²) ∫_{−1}^1 f (x)u′′(x)dx
                        = −(1/α²) [f (x)u′(x)]_{−1}^{+1} + (1/α²) ∫_{−1}^1 f ′(x)u′(x)dx
                        = (1/α²) [f ′(x)u(x)]_{−1}^{+1} − (1/α²) ∫_{−1}^1 f ′′(x)u(x)dx.
Since, by the Weyl Theorem, αn = O(n), it follows that the integral is O(n^{−2}).
                                                                         49
FASTER DECAY OF THE EXPANSION COEFFICIENTS
We consider Birkhoff expansions (expansions in eigenfunctions of a differential
operator). A great deal is known about such expansions with regard to their convergence and
other features [Birkhoff, Keldysh, . . . ]. In place of the harmonic oscillator, we thus
consider the p-harmonic oscillator with Neumann boundary conditions,
    u^{(2p)} − (−1)^p α^{2p} u = 0,      u^{(i)}(±1) = 0,  i = p, p + 1, . . . , 2p − 1.
Such functions were already considered by Mark Krein (1939), who proved
many of their features; they were presented explicitly in the case p = 2 by V.N.
Fadeeva and used by Andrei Kolmogorov in his theory of n-widths.

Let u be such an eigenfunction. Then, integrating repeatedly by parts,
    ⟨f, u⟩ = ∫_{−1}^1 f (x)u(x)dx = ((−1)^p/α^{2p}) ⟨f, u^{(2p)}⟩ = · · · = (1/α^{2p}) ⟨f^{(p)}, u^{(p)}⟩.

Conclusion 1: The eigenfunctions are orthogonal.

Intermezzo: Krein proved that, denoting the linear space of eigenfunctions
corresponding to positive eigenvalues by Hp, it is true that
    Hp ⊕ P_{p−1} = L²[−1, 1].
                                                                                    50
Conclusion 2: Iterating the identity, for f ∈ C^∞[−1, 1],
    ⟨f, u⟩ ∼ Σ_{n=1}^∞ ((−1)^{p(n+1)}/α^{2pn}) Σ_{j=0}^{p−1} (−1)^j [f^{(p(2n−1)+j)}(1)u^{(p−1−j)}(1)
                                                − f^{(p(2n−1)+j)}(−1)u^{(p−1−j)}(−1)].


An explicit form of eigenfunctions: There are two types of eigenfunctions:
    u(x) = Σ_{k=0}^{⌊p/2⌋} ak cos(αx sin(kπ/p)) cosh(αx cos(kπ/p))
            + Σ_{k=1}^{⌊p/2⌋−1} bk sin(αx sin(kπ/p)) sinh(αx cos(kπ/p));

    u(x) = Σ_{k=0}^{⌊p/2⌋} ck cos(αx sin(kπ/p)) sinh(αx cos(kπ/p))
            + Σ_{k=1}^{⌊p/2⌋−1} dk sin(αx sin(kπ/p)) cosh(αx cos(kπ/p)).
Here α is a solution of a transcendental equation. Once α is known, we can
evaluate ak , bk , ck , dk easily by solving a linear system.
                                                                           51
Conclusion: u^{(i)} ∼ O(α^i). Therefore it follows from the asymptotic formula
that, for the nth eigenfunction,
\[ \langle f, u_n\rangle \sim O\bigl(\alpha_n^{-p-1}\bigr) = O\bigl(n^{-p-1}\bigr). \]




Example p = 2: Either
\[ \tan\alpha + \tanh\alpha = 0, \qquad
   u_{2n-1}(x) = \frac{\cos\alpha x}{\cos\alpha} + \frac{\cosh\alpha x}{\cosh\alpha}, \]
or
\[ \tan\alpha - \tanh\alpha = 0, \qquad
   u_{2n}(x) = \frac{\sin\alpha x}{\sin\alpha} + \frac{\sinh\alpha x}{\sinh\alpha}. \]
[Figure: the first few eigenfunctions plotted on [−1, 1].]

                                                                           52
In the biharmonic case p = 2 each α_{2n−1} lies in the interval
\[ \bigl[(n - \tfrac14)\pi,\; (n - \tfrac14)\pi + e^{-2(n-\frac14)\pi}\bigr], \]
while each α_{2n} lives in
\[ \bigl[(n + \tfrac14)\pi - e^{-2(n+\frac14)\pi},\; (n + \tfrac14)\pi\bigr]. \]
A few Newton–Raphson iterations suffice to compute the α_n s to machine
precision.
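As an illustration, a minimal Python sketch of this root computation (the function name and the fixed count of ten Newton steps are my own choices), started from the endpoints of the intervals above:

```python
import math

def biharmonic_alphas(nmax):
    """First 2*nmax values of alpha for p = 2, via Newton-Raphson.

    alpha_{2n-1} solves tan(a) + tanh(a) = 0, starting from (n - 1/4)*pi;
    alpha_{2n}   solves tan(a) - tanh(a) = 0, starting from (n + 1/4)*pi.
    """
    alphas = []
    for n in range(1, nmax + 1):
        for sign, guess in ((1.0, (n - 0.25) * math.pi),
                            (-1.0, (n + 0.25) * math.pi)):
            a = guess
            for _ in range(10):
                g = math.tan(a) + sign * math.tanh(a)
                # derivative: sec^2(a) +/- sech^2(a)
                dg = 1.0 / math.cos(a) ** 2 + sign * (1.0 - math.tanh(a) ** 2)
                a -= g / dg
            alphas.append(a)
    return alphas

print(biharmonic_alphas(2))
```

Because the roots approach (n ∓ 1/4)π exponentially fast, the starting values already carry most of the accuracy and Newton converges in a handful of steps.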

THE EXPANSION
We assume in the sequel that p = 2 – everything scales up to general p. In
this case 0 is an eigenvalue of multiplicity 2, while the remaining
eigenvalues are countably many, distinct and positive. Thus, the orthogonal
approximation is
\[ F_m(x) = \tfrac12 \hat f_0 + \tfrac32 \hat f_1 x + \sum_{n=1}^{m} \hat f_n u_n(x), \]
where
\[ \hat f_0 = \int_{-1}^{1} f(x)\,\mathrm{d}x, \qquad
   \hat f_1 = \int_{-1}^{1} x f(x)\,\mathrm{d}x \]
are non-oscillatory. Note that \hat f_n = \langle f, u_n\rangle = O(n^{-3}).
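To illustrate, a brute-force numerical sketch of F_m for p = 2 (all names are hypothetical): the coefficients are computed by dense trapezoidal quadrature and each eigenfunction is normalised numerically, so no closed-form norm is assumed.

```python
import numpy as np

def trap(y, h):
    # composite trapezoidal rule on an equispaced grid
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def F_m(f, m, x):
    """Orthogonal approximation F_m for p = 2, computed by brute force."""
    t = np.linspace(-1.0, 1.0, 20001)
    h = t[1] - t[0]
    ft = f(t)
    # non-oscillatory leading terms: projections onto 1 and x
    F = 0.5 * trap(ft, h) + 1.5 * trap(t * ft, h) * x
    for n in range(1, m + 1):
        odd = (n % 2 == 1)
        k = (n + 1) // 2
        sign = 1.0 if odd else -1.0
        a = (k - 0.25) * np.pi if odd else (k + 0.25) * np.pi
        for _ in range(8):                       # Newton for alpha_n
            g = np.tan(a) + sign * np.tanh(a)
            dg = 1.0 / np.cos(a) ** 2 + sign * (1.0 - np.tanh(a) ** 2)
            a -= g / dg
        if odd:
            u = lambda z: np.cos(a * z) / np.cos(a) + np.cosh(a * z) / np.cosh(a)
        else:
            u = lambda z: np.sin(a * z) / np.sin(a) + np.sinh(a * z) / np.sinh(a)
        ut = u(t)
        # normalise numerically rather than assuming a closed-form norm
        F = F + trap(ft * ut, h) / trap(ut * ut, h) * u(x)
    return F
```

The hyperbolic terms are always evaluated as cosh(αx)/cosh(α) or sinh(αx)/sinh(α), which stay O(1) and avoid overflow for moderate m.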
                                                                         53
[Figure: pointwise error in approximating f(x) = e^x by F_m for m = 10, 20,
40 and 80, respectively.]
                                                                                                                       54
Computation of polyharmonic integrals

The trick is to scale up our “p = 1 experience” to this setting.

ASYMPTOTIC EXPANSION
Truncating, we have
\[ \hat Q_n^{[\rho_{s,j},\rho_{s,j}]}[f]
   = \sum_{r=0}^{s-1} \frac{(-1)^{(r+1)p}}{\alpha_n^{2(r+1)p}}
     \sum_{k=p}^{2p-1} (-1)^k \bigl[ f^{(2pr+k)}(1)\, u_n^{(2p-k-1)}(1)
       - f^{(2pr+k)}(-1)\, u_n^{(2p-k-1)}(-1) \bigr] \]
\[ \qquad{} + \frac{(-1)^{(s+1)p}}{\alpha_n^{2(s+1)p}}
     \sum_{k=p}^{p+j-1} (-1)^k \bigl[ f^{(2ps+k)}(1)\, u_n^{(2p-k-1)}(1)
       - f^{(2ps+k)}(-1)\, u_n^{(2p-k-1)}(-1) \bigr], \]
where s ≥ 0, j ∈ {0, …, p − 1} and
\[ \rho_{s,j} = \begin{cases} 2ps - 1, & j = 0,\\
                              (2s+1)p + j - 1, & j = 1, \ldots, p-1 \end{cases} \]
is the number of derivatives at ±1.
                                                                                 55
THEOREM It is true for every s ≥ 0 and j = 0, …, p − 1 that
\[ \hat Q_n^{[\rho_{s,j},\rho_{s,j}]} \sim \hat f_n
   + O\bigl(n^{-(2s+1)p-j-1}\bigr), \qquad n \gg 1. \]


FILON-TYPE QUADRATURE
We again choose nodes and weights, interpolate in a Hermite sense at the
nodes and integrate the product of the interpolant ψ and the eigenfunction
u_n. Note, however, that not all derivatives are required for the asymptotic
expansion (we have already encountered this situation with modified Fourier:
only odd derivatives were required!). Therefore, what we really seek is
Birkhoff–Hermite interpolation: interpolate to some derivatives but not to
others.

Such an interpolation problem is not always well defined, hence we need to be
careful. It is well defined, however, in all cases of interest.



                                                                                       56
Let ψ be the interpolant and assume that we have used derivatives lower than
the pth (i.e., the ones that don't feature in the asymptotic expansion) at
the origin. Then
\[ \psi(x) = \sum_{l=0}^{q-1} \frac{1}{l!} f^{(l)}(0)\, x^l
   + \frac{1}{(q-1)!} \int_0^x (x-t)^{q-1} \varphi(t)\,\mathrm{d}t, \]
where φ interpolates higher derivatives only (hence is of lower degree). In
that case, assuming the multiplicities m_1 = ⋯ = m_ν = s, the Filon method is
\[ \hat Q_n^m[f] = \frac{1}{\alpha_n^{2q}} \int_{-1}^{1} \varphi(x)\,
   u_n^{(q)}(x)\,\mathrm{d}x, \qquad n = 1, 2, \ldots, \]
and it is of asymptotic order (2s + 1)q + j + 1.

EXOTIC QUADRATURE scales up to this situation, although no general the-
ory is yet available.


                                                                           57
[Figure: scaled errors n^4(\hat Q_n^{[2,2]} - \hat f_n),
n^7(\hat Q_n^{[3,3]} - \hat f_n) and n^8(\hat Q_n^{[6,6]} - \hat f_n)
for f(x) = e^x and q = 2.]


[Figure: scaled errors n^4 |\hat Q_n^{[2,2,2,2]}[f] - \hat f_n| and
n^7 |\hat Q_n^{[3,3,3,3]}[f] - \hat f_n| for f(x) = e^x and q = 2.]

                                                                                                                                                                                                                                                                                58
Modified Fourier in a cube

The basic expansion

On the face of it, the univariate case generalizes easily to [−1, 1]^d by
Cartesian products. However, complexity increases fast and, to make life
easier, we need to employ the right notation.

We let
\[ \mu_n^{[0]} = n, \qquad \mu_n^{[1]} = n - \tfrac12, \qquad
   c_n^{[j]} = \begin{cases} \tfrac12, & n = 0,\ j = 0,\\
                             0, & n = 0,\ j = 1,\\
                             1, & n \ge 1,\ j \in \{0,1\}, \end{cases} \]
\[ u_n^{[0]}(x) = \cos \pi n x, \qquad
   u_n^{[1]}(x) = \sin \pi (n - \tfrac12) x, \]
\[ |e| = \sum_{k=1}^{d} e_k, \qquad
   \partial_x^m = \partial_{x_1}^{m_1} \partial_{x_2}^{m_2} \cdots
   \partial_{x_d}^{m_d} \quad\text{(multi-index notation),} \]
\[ S_\alpha[f](y) = \sum_{e \in \mathbb{Z}_2^d} (-1)^{|e| + e^\top \alpha}
   f\bigl((-1)^{e_1} y_1, (-1)^{e_2} y_2, \ldots, (-1)^{e_d} y_d\bigr). \]

                                                                                     59
THEOREM The normalised Laplace–Neumann eigenfunctions in [−1, 1]^d are
\[ u_n^{[\alpha]}(x) = \prod_{j=1}^{d} u_{n_j}^{[\alpha_j]}(x_j), \qquad
   n_j \ge \alpha_j, \quad j = 1, \ldots, d, \quad \alpha \in \mathbb{Z}_2^d. \]
Moreover,
\[ C^\infty[-1,1]^d \ni f \sim \sum_{n \in \mathbb{Z}_+^d}
   \sum_{\alpha \in \mathbb{Z}_2^d} c_n^{[\alpha]} \hat f_n^{[\alpha]}
   u_n^{[\alpha]}, \qquad
   c_n^{[\alpha]} = \begin{cases} 0, & \exists\, j \in \chi(n)
     \text{ s.t. } \alpha_j = 1,\\
     2^{-\#\chi(n)}, & \text{otherwise,} \end{cases} \]
where χ(n) = {i : n_i = 0}, #S is the number of terms in the set S and
\[ \hat f_n^{[\alpha]} = \int_{[-1,1]^d} f(x)\, u_n^{[\alpha]}(x)\,
   \mathrm{d}x_1 \cdots \mathrm{d}x_d. \]




                                                                                  60
THEOREM Each coefficient \hat f_n^{[\alpha]} has the asymptotic expansion
\[ \hat f_n^{[\alpha]} \sim (-1)^{|n|+|\alpha|} \sum_{m=0}^{\infty}
   \frac{(-1)^m}{\pi^{2m+2d}} \sum_{|j|=m}
   \frac{S_\alpha[\partial_x^{2j+1} f](1)}
        {\bigl(\mu_{n_1}^{[\alpha_1]}\bigr)^{2j_1+2} \cdots
         \bigl(\mu_{n_d}^{[\alpha_d]}\bigr)^{2j_d+2}} \]
for n_1, …, n_d ≫ 1.

Therefore,
\[ \hat f_n^{[\alpha]} \sim O\bigl(\bar n^{-2d}\bigr), \qquad
   \bar n = \min_i \{n_i\} \gg 1. \]

If, however, some n_i s are small and others large, \hat f_n^{[\alpha]}
decays more slowly: it is easy to work all this out in detail.

The univariate theory of convergence generalises at once to the multivariate
setting.
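As a quick sanity check of the O(n̄^{−2d}) rate: for a separable function such as f(x, y) = e^{x−2y}, the pure-cosine coefficient factorises into two univariate integrals with elementary closed forms. The helper names below are hypothetical.

```python
import math

def coeff1(n, c):
    """int_{-1}^{1} e^{c x} cos(pi n x) dx, by elementary integration."""
    return ((-1) ** n * c * (math.exp(c) - math.exp(-c))
            / (c * c + (math.pi * n) ** 2))

def coeff2(n1, n2):
    """The [0,0] (pure cosine) coefficient of f(x, y) = e^{x - 2y}."""
    return coeff1(n1, 1.0) * coeff1(n2, -2.0)

# doubling (n1, n2) = (n, n) divides each factor by about 4, hence the
# coefficient by about 16: the O(nbar^{-2d}) rate with d = 2
print(coeff2(10, 10), coeff2(20, 20))
```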


                                                                                 61
[Figure: the error in approximating e^{x−2y} with n_1, n_2 ≤ 20 (left) and
n_1, n_2 ≤ 40 (right) terms in [−1, 1]^2 (top row) and [−9/10, 9/10]^2
(bottom).]

                                                                                                                 62
Calculating the coefficients

The most obvious approach is asymptotic quadrature,
\[ A_n^{[\alpha]} = (-1)^{|n|+|\alpha|} \sum_{m=0}^{N-1}
   \frac{(-1)^m}{\pi^{2m+2d}} \sum_{|j|=m}
   \frac{S_\alpha[\partial_x^{2j+1} f](1)}
        {\bigl(\mu_{n_1}^{[\alpha_1]}\bigr)^{2j_1+2} \cdots
         \bigl(\mu_{n_d}^{[\alpha_d]}\bigr)^{2j_d+2}} . \]
Thus, we need to compute all the odd derivatives, up to the (2N − 1)st, at
the 2^d vertices of the cube. The cost of computing all the coefficients for
n_k ≤ M is O(M^d) and
\[ A_n^{[\alpha]} = \hat f_n^{[\alpha]} + O\bigl(\bar n^{-2(N+d)}\bigr),
   \qquad \bar n = \min\{n_1, \ldots, n_d\}. \]

Of course, all this makes sense only for \bar n ≫ 1: we will address the
other cases later.
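For d = 1 and α = 0 the asymptotic quadrature reduces to the classical expansion of the cosine coefficient in the odd-derivative jumps f^(2m+1)(1) − f^(2m+1)(−1). A sketch (names are my own) for f(x) = e^x, whose exact coefficient is available in closed form:

```python
import math

def asymptotic_coeff(jumps, n, N):
    """Truncated asymptotic expansion of int_{-1}^{1} f(x) cos(pi n x) dx:

        (-1)^n * sum_{m=0}^{N-1} (-1)^m * jumps[m] / (pi n)^(2m+2),

    where jumps[m] = f^(2m+1)(1) - f^(2m+1)(-1); this is the d = 1,
    alpha = 0 instance of the asymptotic quadrature.
    """
    s = sum((-1) ** m * jumps[m] / (math.pi * n) ** (2 * m + 2)
            for m in range(N))
    return (-1) ** n * s

# f(x) = e^x: every odd-derivative jump equals e - 1/e, and the exact
# coefficient is (-1)^n (e - 1/e) / (1 + (pi n)^2)
jump = math.e - 1.0 / math.e
exact = lambda n: (-1) ** n * jump / (1.0 + (math.pi * n) ** 2)

for n in (5, 10, 20):
    print(n, abs(asymptotic_coeff([jump] * 3, n, 3) - exact(n)))
```

With N = 3 terms the error decays like n^{−8}, consistent with the O(n̄^{−2(N+d)}) estimate.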




                                                                                         63
Extended Filon methods on a tartan grid

We wish to evaluate derivatives mostly at the vertices, but also at points in
a tartan grid, e.g. in 2D:

[Figure: a tartan grid of points in the square.]

Specifically,
\[ R = \{ (r_{i_1} (-1)^{e_1}, \ldots, r_{i_d} (-1)^{e_d}) :
   e \in \mathbb{Z}_2^d,\ i_1, \ldots, i_d \in \{1, \ldots, \nu\} \}, \]
where 0 < r_1 < r_2 < ⋯ < r_ν = 1.
                                                                                   64
The method is
\[ Q_{n,N,M}^{[\alpha]} = A_{n,N}^{[\alpha]} + (-1)^{|n|+|\alpha|}
   \sum_{m=N}^{N+M-1} \frac{(-1)^m}{\pi^{2(m+d)}} \sum_{|j|=m}
   \frac{E_j^{[\alpha]}[f]}
        {\bigl(\mu_{n_1}^{[\alpha_1]}\bigr)^{2j_1+2} \cdots
         \bigl(\mu_{n_d}^{[\alpha_d]}\bigr)^{2j_d+2}} , \]
where
\[ E_j^{[\alpha]}[f] = \sum_{c \in R} \sum_{i \in \Pi(c)}
   \sigma_j^{[\alpha],i}\, \partial_x^i f(c)
   \approx S_\alpha[\partial_x^{2j+1} f](1), \qquad
   N \le |j| \le N + M - 1, \]
while Π(c) is the index set of all derivatives evaluated at c ∈ R,
\[ i \in \Pi(c) \quad\Rightarrow\quad
   \text{we evaluate } \partial_x^i f(c). \]
It is useful to think about E_j^{[\alpha]} as a linear combination of
derivatives intended to approximate S_\alpha[\partial_x^{2j+1} f](1).
                                                                                    65
Explicit construction of E_j^{[\alpha]}

PROPOSITION Given s ∈ {0, …, μ}, there exist coefficients a_k^{[i,s]} such
that
\[ \sum_{k=1}^{\nu} a_k^{[0,s]} \sinh r_k \theta
   = \theta^{2s} \sinh \theta + O\bigl(\theta^{2\nu+1}\bigr), \]
\[ \sum_{k=1}^{\nu} a_k^{[1,s]} \cosh r_k \theta
   = \theta^{2s} \cosh \theta + O\bigl(\theta^{2\nu}\bigr). \]

THEOREM The sum
\[ \sum_{k_1=1}^{\nu} \cdots \sum_{k_d=1}^{\nu}
   \Bigl( \prod_{j=1}^{d} a_{k_j}^{[\alpha_j, s_j]} \Bigr)
   S_\alpha[\partial_x^{\mathbf 1} f](r_{k_1}, \ldots, r_{k_d}) \]
matches S_\alpha[\partial_x^{2s+1} f](1) for all polynomials f of degree
2ν + 1 in each variable.
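The coefficients a_k^{[0,s]} of the proposition can be obtained by matching the Taylor coefficients of the odd powers θ^{2j+1}, j < ν, which gives a ν × ν Vandermonde-type linear system. A sketch (the helper name is mine):

```python
import numpy as np
from math import factorial, sinh

def sinh_coeffs(r, s):
    """Coefficients a_k^{[0,s]} with

        sum_k a_k sinh(r_k theta) = theta^(2s) sinh(theta) + O(theta^(2 nu + 1)),

    obtained by matching the Taylor coefficients of theta^(2j+1), j < nu:
    a Vandermonde-like system in the odd powers r_k^(2j+1).
    """
    nu = len(r)
    A = np.array([[rk ** (2 * j + 1) / factorial(2 * j + 1) for rk in r]
                  for j in range(nu)])
    rhs = np.array([1.0 / factorial(2 * (j - s) + 1) if j >= s else 0.0
                    for j in range(nu)])
    return np.linalg.solve(A, rhs)

r = [1.0 / 3.0, 2.0 / 3.0, 1.0]          # nu = 3, with r_nu = 1 as required
a = sinh_coeffs(r, 1)
for theta in (0.2, 0.1):
    resid = (sum(ak * sinh(rk * theta) for ak, rk in zip(a, r))
             - theta ** 2 * sinh(theta))
    print(theta, resid)                   # residual shrinks like theta^7
```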

                                                                                                  66
Extended Filon on the tartan grid

1. Compute \partial_x^{2j+1} f, j = 0, …, N − 1, at the vertices of the cube.
   (The information required for the asymptotic method.)

2. Compute \partial_x^{\mathbf 1} f on the tartan grid. (Altogether, (2ν)^d
   function evaluations.)

3. Form
\[ \sigma_j^{[\alpha]} = \frac{(-1)^{m+|\alpha|}}{\pi^{2(m+d)}}\,
   S_\alpha[\partial_x^{2j+1} f](1), \qquad m \le N - 1, \quad
   \alpha \in \mathbb{Z}_2^d, \]
\[ \sigma_j^{[\alpha]} = \frac{(-1)^{m+|\alpha|}}{\pi^{2(m+d)}}
   \sum_{k_1,\ldots,k_d=1}^{\nu}
   \Bigl( \prod_{i=1}^{d} a_{k_i}^{[\alpha_i, j_i]} \Bigr)
   S_\alpha[\partial_x^{\mathbf 1} f](r_{k_1}, \ldots, r_{k_d}) \]
   for N ≤ m ≤ N + M − 1 and α ∈ \mathbb{Z}_2^d, where m = |j|.

4. Form
\[ Q_{n,N,M}^{[\alpha]} = (-1)^{|n|} \sum_{m=0}^{N+M-1} \sum_{|j|=m}
   \frac{\sigma_j^{[\alpha]}}
        {\bigl(\mu_{n_1}^{[\alpha_1]}\bigr)^{2j_1+2} \cdots
         \bigl(\mu_{n_d}^{[\alpha_d]}\bigr)^{2j_d+2}} . \]
                                                                                     67
Exotic quadrature

All this can be accomplished when n1, n2, . . . , nd ≫ 1 and we are in the
asymptotic regime.

Otherwise, i.e. if either all or some of the n_k s are small, we use exotic
quadrature. The rules of the game are as follows:

We compute f(0); otherwise we are allowed only the derivative values of f
that we have already computed, either at the vertices or on the tartan grid.
We form a linear combination of all these to approximate the relevant
coefficients.

We have an extensive theory of how to choose explicitly optimal linear
combinations for each combination of "small" and "large" coefficients. It
follows along the same lines as Filon-on-tartan but is, if anything, even
more detail-heavy. In the spirit of international friendship, we will spare
the audience these gory details.
                                                                             68
Polynomial subtraction and the hyperbolic cross
Back to one dimension. . . . A powerful alternative to modified Fourier
expansion is polynomial subtraction [Kantorovich & Krylov], a technique that
accelerates the convergence of classical Fourier expansions for non-periodic
functions.

Standard Fourier coefficients decay like O(n^{−1}). However, suppose that p
is a polynomial such that
\[ p^{(j)}(\pm 1) = f^{(j)}(\pm 1), \qquad j = 0, \ldots, r-1. \]
We let f = p + g, i.e. g = f − p, and Fourier-expand g. In that case
\[ \hat g_n \sim O\bigl(n^{-r-1}\bigr). \]
This is not an alternative to modified Fourier: combining modified Fourier
with polynomial subtraction makes everything even more efficient!

                                                                        69
We seek a polynomial p such that
\[ p^{(2k+1)}(\pm 1) = f^{(2k+1)}(\pm 1), \qquad k = 0, \ldots, r-1, \]
represent f = p + g and modified-Fourier-expand g. In that case
\[ \hat g_n^C,\ \hat g_n^S \sim O\bigl(n^{-2r-2}\bigr). \]

LEMMA Let \theta_{1,\pm}(x) = \tfrac12 (x \pm \tfrac12 x^2) and
\[ \theta_{n+1,+}(x) = a_n x + \tfrac12 b_n x^2
   + \int_0^x (x-t)\,\theta_{n,+}(t)\,\mathrm{d}t, \]
and \theta_{n+1,-}(x) = -\theta_{n+1,+}(-x), where
\[ a_n = -\tfrac12 \int_0^1 [\theta_{n,+}(t) - \theta_{n,+}(-t)]\,\mathrm{d}t,
   \qquad
   b_n = -\tfrac12 \int_0^1 [\theta_{n,+}(t) + \theta_{n,+}(-t)]\,\mathrm{d}t. \]
Then, for all l = 0, …, n − 1,
\[ \theta_{n,\pm}^{(2l+1)}(\pm 1)
   = \begin{cases} 0, & l = 0, \ldots, n-2,\\
                   1, & l = n-1, \end{cases} \qquad
   \theta_{n,\pm}^{(2l+1)}(\mp 1) = 0. \]
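The recursion of the lemma is easy to realise in exact arithmetic. The sketch below (helper name mine) assumes a plus sign inside the bracket defining b_n, since the two displayed formulas cannot both equal the a_n integrand, and that choice makes the stated boundary conditions hold; it then verifies them symbolically.

```python
import sympy as sp

x, t = sp.symbols('x t')

def theta_plus(nmax):
    """theta_{n,+}, n = 1..nmax, built by the recursion of the lemma
    (assuming the plus sign inside the bracket for b_n)."""
    th = [sp.Rational(1, 2) * (x + x ** 2 / 2)]         # theta_{1,+}
    for _ in range(nmax - 1):
        cur = th[-1]
        an = -sp.Rational(1, 2) * sp.integrate(cur - cur.subs(x, -x), (x, 0, 1))
        bn = -sp.Rational(1, 2) * sp.integrate(cur + cur.subs(x, -x), (x, 0, 1))
        th.append(sp.expand(an * x + bn * x ** 2 / 2
                            + sp.integrate((x - t) * cur.subs(x, t), (t, 0, x))))
    return th

for n, p in enumerate(theta_plus(3), start=1):
    # odd derivatives at x = 1: all zero except the last, which equals 1
    print(n, [sp.diff(p, x, 2 * l + 1).subs(x, 1) for l in range(n)])
```

The second derivative of the recursion gives θ_{n+1}^{(k+2)} = θ_n^{(k)} for k ≥ 1, so the higher boundary conditions propagate by induction, while a_n and b_n are exactly the choices that kill the first derivative at ±1.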

                                                                                      70
[Figure: pointwise error of the expansion with polynomial subtraction for
12, 24, 48 and 96 terms; top row r = 0.]
                                                                                                                                                                                                    10
                                                                                                                                                                                                         −9



                                                                                                                                                                                                                2
                                 4
                                                                                             2                                                         2

                                                                                                                                                                                                                1
                                                                            x
                                 2                                                                                                                     1
                                                                  −0.75   −0.5   −0.25           0.0   0.25   0.5   0.75
r=1




                                                                                             0

                                                                                                                                                                                                                0
                                                                                                                                                       0
                                 0                                                                                                                                                   −0.75   −0.5   −0.25           0.0   0.25   0.5   0.75
                                                                                                                           −0.75   −0.5   −0.25            0.0   0.25   0.5   0.75
      −0.75   −0.5   −0.25           0.0   0.25   0.5   0.75                                                                                                                                   x
                                                                                                                                     x
                x                                                                           −2
                                                                                                                                                      −1
                                                                                                                                                                                                                −1
                                −2

                                                                                                                                                      −2
                                                                                            −4



                          −9                                                                                                                   −13                                                       −14
                     10                                                          10 −11                                                   10                                                        10

                                 2                                                                                                                                                                              2
                                                                                                                                                      10

                                                                                           5.0

                                 1
                                                                                                                                                       5                                                        1
r=2




                x
                                                                                           2.5                                       x                                                         x
      −0.75   −0.5   −0.25           0.0   0.25   0.5   0.75
                                                                                                                           −0.75   −0.5   −0.25            0.0   0.25   0.5   0.75   −0.75   −0.5   −0.25           0.0   0.25   0.5   0.75
                                 0
                                                                                                                                                       0                                                        0
                                                                                           0.0
                                                                          −0.5                   0.0          0.5
                                −1
                                                                             x                                                                        −5                                                        −1
                                                                                           −2.5

                                −2
                                                               f (x) = ex                  −5.0
                                                                                                                                                     −10
                                                                                                                                                                                                                −2




                                                                                                                                                                                                                                       71
Polynomial subtraction in [−1, 1]²

A naive approach: interpolate the odd derivatives of f at the vertices and
proceed as before, using Cartesian products. This, however, will not help at all
in evaluating $\hat g^{[\alpha]}_{n_1,n_2}$ when $0 \le n_1 \ll n_2$, say. Instead, we need to fit normal
derivatives along the entire boundary!

This can be done using techniques from computer-aided geometric design.
For example, it is easy to see that
$$\begin{aligned}
p(x,y) ={}& \theta_{1,-}(x)f_x(-1,y) + \theta_{1,+}(x)f_x(1,y) + \theta_{1,-}(y)f_y(x,-1)\\
&+ \theta_{1,+}(y)f_y(x,1) - [\theta_{1,-}(x)\theta_{1,-}(y)f_{xy}(-1,-1)\\
&+ \theta_{1,-}(x)\theta_{1,+}(y)f_{xy}(-1,1) + \theta_{1,+}(x)\theta_{1,-}(y)f_{xy}(1,-1)\\
&+ \theta_{1,+}(x)\theta_{1,+}(y)f_{xy}(1,1)]
\end{aligned}$$
obeys
$$\frac{\partial p(x,y)}{\partial n} = \frac{\partial f(x,y)}{\partial n}, \qquad (x,y)\in\partial[-1,1]^2.$$
                                                                         72
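The blending functions θ₁,± are defined earlier in the lectures; any choice with θ′₁,₋(−1) = 1, θ′₁,₋(1) = 0, θ′₁,₊(1) = 1, θ′₁,₊(−1) = 0 makes the boundary identity above hold exactly. A symbolic sanity check with a hypothetical cubic choice of θ (an assumption for illustration, not necessarily the θ used in the lectures):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(1 + x - 2*y)  # the test function used later on these slides

# Hypothetical cubic blending functions: th_m'(-1)=1, th_m'(1)=0; th_p'(1)=1, th_p'(-1)=0
th_m = (x - 1)**2 * (x + 1) / 4
th_p = (x + 1)**2 * (x - 1) / 4

fx, fy, fxy = sp.diff(f, x), sp.diff(f, y), sp.diff(f, x, y)

def at(expr, xv=None, yv=None):
    # substitute boundary values into a derivative of f
    s = expr
    if xv is not None: s = s.subs(x, xv)
    if yv is not None: s = s.subs(y, yv)
    return s

p = (th_m * at(fx, xv=-1) + th_p * at(fx, xv=1)
     + th_m.subs(x, y) * at(fy, yv=-1) + th_p.subs(x, y) * at(fy, yv=1)
     - (th_m * th_m.subs(x, y) * at(fxy, -1, -1)
        + th_m * th_p.subs(x, y) * at(fxy, -1, 1)
        + th_p * th_m.subs(x, y) * at(fxy, 1, -1)
        + th_p * th_p.subs(x, y) * at(fxy, 1, 1)))

# normal derivatives of p and f agree on all four edges of [-1,1]^2
px, py = sp.diff(p, x), sp.diff(p, y)
assert sp.simplify(px.subs(x, -1) - fx.subs(x, -1)) == 0
assert sp.simplify(px.subs(x, 1) - fx.subs(x, 1)) == 0
assert sp.simplify(py.subs(y, -1) - fy.subs(y, -1)) == 0
assert sp.simplify(py.subs(y, 1) - fy.subs(y, 1)) == 0
print("normal derivatives match on the boundary")
```

The corner terms involving f_xy are exactly what cancels the cross-contamination between the four edge interpolants, which is why the identity holds for any θ with the stated endpoint derivatives.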
Letting g = f − p, we thus have
$$\hat g^{[\alpha]}_{n_1,n_2} \sim O\!\left(\bar n^{-8}\right), \qquad \bar n = \min\{n_1, n_2\} \gg 1,$$
but also, for example,
$$\hat g^{[\alpha]}_{n_1,n_2} \sim O\!\left(n_2^{-4}\right), \qquad n_1 \ll n_2.$$


This can be generalised to all r ≥ 1 (technically complicated!) and, by an
iterative process, to the cube [−1, 1]d for all d ≥ 1.

Next page: the error committed for f(x, y) = sin(1 + x − 2y) and r = 0, 1, 2 (top to
bottom) with n₁, n₂ ≤ K, where K = 10, 20, 40 (left to right).




                                                                                    73
[Figure: the error for f(x, y) = sin(1 + x − 2y) with r = 0, 1, 2 (top to bottom) and K = 10, 20, 40 (left to right); the vertical scales shrink from 10^{−3} at the top left to 10^{−12} at the bottom right.]
                                                                                                                                         74
The incredible shrinking coefficients

Let us assume d = 2, consider the function $e^{x-y}$ and let $c^{[r]}_{m,n} = \hat g^{[0,0]}_{m,n}$: We
have
$$c^{[0]}_{m,n} = \frac{4(-1)^{m+n}\sinh^2\pi}{\pi^2(1+m^2)(1+n^2)},$$
$$c^{[1]}_{m,n} = \begin{cases}
\dfrac{(\pi^2-6)^2\sinh^2\pi}{9\pi^2}, & m=n=0,\\[1ex]
\dfrac{2(-1)^m(\pi^2-6)\sinh^2\pi}{3\pi^2 m^2(1+m^2)}, & m\in\mathbb N,\ n=0,\\[1ex]
\dfrac{2(-1)^n(\pi^2-6)\sinh^2\pi}{3\pi^2 n^2(1+n^2)}, & m=0,\ n\in\mathbb N,\\[1ex]
\dfrac{4(-1)^{m+n}\sinh^2\pi}{\pi^2 m^2 n^2(1+m^2)(1+n^2)}, & m,n\in\mathbb N;
\end{cases}$$
$$c^{[2]}_{m,n} = \begin{cases}
-\dfrac{(7\pi^4-60\pi^2+360)^2\sinh^2\pi}{32400\pi^2}, & n=m=0,\\[1ex]
\dfrac{(-1)^m(7\pi^4-60\pi^2+360)\sinh^2\pi}{90\pi^2 m^4(1+m^2)}, & m\in\mathbb N,\ n=0,\\[1ex]
\dfrac{(-1)^n(7\pi^4-60\pi^2+360)\sinh^2\pi}{90\pi^2 n^4(1+n^2)}, & m=0,\ n\in\mathbb N,\\[1ex]
\dfrac{4(-1)^{m+n}\sinh^2\pi}{\pi^2 m^4 n^4(1+m^2)(1+n^2)}, & m,n\in\mathbb N.
\end{cases}$$




                                                                                75
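The entries of $c^{[0]}$ factor into one-dimensional cosine moments, so the closed form is easy to check numerically. The normalisation $c^{[0]}_{m,n} = \pi^{-2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} e^{x-y}\cos mx\,\cos ny\,\mathrm dx\,\mathrm dy$ is my inference from the $\sinh^2\pi$ factors, not stated on the slide; under that assumption:

```python
import math
from scipy.integrate import quad

def c0_formula(m, n):
    return 4 * (-1)**(m + n) * math.sinh(math.pi)**2 / (
        math.pi**2 * (1 + m**2) * (1 + n**2))

def cosine_moment(sign, k):
    # (1/pi) * integral_{-pi}^{pi} e^{sign*t} cos(k*t) dt = 2(-1)^k sinh(pi)/(pi(1+k^2))
    val, _ = quad(lambda t: math.exp(sign * t) * math.cos(k * t), -math.pi, math.pi)
    return val / math.pi

for m in range(4):
    for n in range(4):
        # the kernel e^{x-y} is separable, so the 2D coefficient is a product
        numeric = cosine_moment(+1, m) * cosine_moment(-1, n)
        assert abs(numeric - c0_formula(m, n)) < 1e-10
print("c[0] formula confirmed for 0 <= m, n <= 3")
```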
The hyperbolic cross [Babenko]

[Figure: contour lines $|c^{[r]}_{m,n}| > 10^{-k}$ for k = 2, 3, …, 9 in the coefficient lattice 0 ≤ m, n ≤ 100.]

Clearly, we can throw away the small coefficients without damaging accuracy: this
retains only a small portion of the coefficient lattice, in the shape of a hyper-
bolic cross. Although a similar phenomenon holds for Fourier expansions, in
our case it is much more important, because discarding part of the co-
efficients is inconsistent with using the FFT – but we don't use the FFT in the first
place!
                                                                                           76
How many points in a hyperbolic cross?

Let
$$K_d(t) = \{\boldsymbol n \in \mathbb Z_+^d : \tilde n_1 \tilde n_2 \cdots \tilde n_d \le t\},$$
where $\tilde m = \max\{m, 1\}$. We denote by $\kappa_d(t)$ the number of points in the
hyperbolic cross $K_d(t)$.

LEMMA It is true that
$$\kappa_d(t) = \frac{t(\log t)^{d-1}}{(d-1)!} + \text{lower-order terms}, \qquad d \ge 1.$$

Let N be a number (dependent on the required accuracy and on r ≥ 0) such that,
in principle, we wish to use all coefficients with $n_1, \dots, n_d \le N$ – altogether
$O(N^d)$ terms. Our analysis means that it is enough to use the coefficients in
$K_d(N)$, and there are only $O(N(\log N)^{d-1})$ of these!

                                                                            77
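The count in the lemma is easy to probe by brute force. A small sketch, enumerating the lattice exhaustively (fine for modest t and d) and comparing with the full tensor grid:

```python
from itertools import product

def kappa(d, t):
    """Count points n in Z_+^d (entries 0..t) with prod(max(n_i, 1)) <= t."""
    N = int(t)
    count = 0
    for n in product(range(N + 1), repeat=d):
        p = 1
        for ni in n:
            p *= max(ni, 1)
        if p <= t:
            count += 1
    return count

# the hyperbolic cross is far sparser than the full grid of (N+1)^d points
for d in (2, 3):
    N = 40
    print(d, kappa(d, N), (N + 1)**d)
```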
Towards general multivariate theory

Unlike Fourier expansion, our technique generalises (at least in principle) to
any bounded domain Ω ⊂ Rd with piecewise-smooth boundary.

THE MULTIVARIATE FRAMEWORK

Let Ω ⊂ R^d be a bounded domain with a piecewise-smooth boundary of finite
length. W.l.o.g. we assume that it is simply connected. The Dirichlet–
Neumann problem
$$\Delta u = \lambda u, \quad \boldsymbol x \in \Omega, \qquad \frac{\partial u}{\partial n} = 0, \quad \boldsymbol x \in \partial\Omega$$
has a countable number of nonpositive eigenvalues λₙ, which accumulate at
infinity. Each eigenvalue is simple and is associated with the eigenfunction
uₙ. The eigenfunctions form an orthogonal set, which is dense in L²(Ω).
Moreover, λ₀ = 0, u₀ ≡ 1.

THE WEYL THEOREM: $\lambda_n \sim \left(\dfrac{n}{\operatorname{meas}\Omega}\right)^{2/d}$.
                                                                         78
Let f ∈ L²(Ω). Then, for n ≥ 1, using the Stokes theorem and incorporating
boundary conditions,
$$\begin{aligned}
\hat f_n &= \int_\Omega f(\boldsymbol x)u_n(\boldsymbol x)\,\mathrm dV = \frac{1}{\lambda_n}\int_\Omega f(\boldsymbol x)\Delta u_n(\boldsymbol x)\,\mathrm dV\\
&= \frac{1}{\lambda_n}\int_{\partial\Omega} f(\boldsymbol x)\frac{\partial u_n(\boldsymbol x)}{\partial n}\,\mathrm dS - \frac{1}{\lambda_n}\int_\Omega \nabla f(\boldsymbol x)\cdot\nabla u_n(\boldsymbol x)\,\mathrm dV\\
&= -\frac{1}{\lambda_n}\int_\Omega \nabla f(\boldsymbol x)\cdot\nabla u_n(\boldsymbol x)\,\mathrm dV.
\end{aligned}$$
Another application of Stokes results in
$$\hat f_n = -\frac{1}{\lambda_n}\int_{\partial\Omega}\frac{\partial f(\boldsymbol x)}{\partial n}\,u_n(\boldsymbol x)\,\mathrm dS + \frac{1}{\lambda_n}\widehat{\Delta f}_n.$$
Iterating this expression results in the asymptotic expansion
$$\hat f_n \sim -\sum_{k=0}^\infty \frac{1}{\lambda_n^{k+1}}\int_{\partial\Omega}\frac{\partial\,\Delta^k f(\boldsymbol x)}{\partial n}\,u_n(\boldsymbol x)\,\mathrm dS = O\!\left(\lambda_n^{-1}\right).$$
The above is given in terms of surface integrals: not a practical means to evaluate
$\hat f_n$. However, it tells us exactly which derivative information is required for a
given asymptotic order, and this is crucial in the design of Filon-type methods.
                                                                             79
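The first identity above can be tested wherever the eigensystem is explicit. A sketch on the unit square (my choice for illustration, with Neumann eigenfunctions $u_{m,n}(x,y) = \cos m\pi x \cos n\pi y$ and $\lambda_{m,n} = -\pi^2(m^2+n^2)$), checking $\hat f_n = -\lambda_n^{-1}\int_\Omega \nabla f\cdot\nabla u_n\,\mathrm dV$ for $f(x,y)=e^{x-y}$:

```python
import math
from scipy.integrate import dblquad

def f(x, y):
    return math.exp(x - y)

def u(m, n, x, y):
    return math.cos(m * math.pi * x) * math.cos(n * math.pi * y)

def fhat(m, n):
    # the coefficient as a plain inner product over [0,1]^2
    val, _ = dblquad(lambda y, x: f(x, y) * u(m, n, x, y), 0, 1, 0, 1)
    return val

def grad_form(m, n):
    # -(1/lambda) * integral of grad f . grad u
    lam = -math.pi**2 * (m**2 + n**2)
    def integrand(y, x):
        fx, fy = f(x, y), -f(x, y)  # gradient of e^{x-y}
        ux = -m * math.pi * math.sin(m * math.pi * x) * math.cos(n * math.pi * y)
        uy = -n * math.pi * math.cos(m * math.pi * x) * math.sin(n * math.pi * y)
        return fx * ux + fy * uy
    val, _ = dblquad(integrand, 0, 1, 0, 1)
    return -val / lam

for (m, n) in [(1, 0), (2, 3), (4, 1)]:
    assert abs(fhat(m, n) - grad_form(m, n)) < 1e-6
print("first integration-by-parts identity verified on the unit square")
```

The boundary term drops out exactly as in the derivation, because these eigenfunctions have a vanishing normal derivative on the edges of the square.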
APPROXIMATION IN A PLANE

The main snag is that we must know the eigensystem explicitly. The only
planar Ωs where this is known are




The most intriguing is the equilateral triangle T, for the following reason. Suppose
that we wish to approximate f in any planar domain Ω with piecewise-linear
boundary. We can partition Ω into (large) triangles T_k, k = 1, …, s,
say. Now, for every T_k there exists a one-to-one affine map χ_k : T → T_k,
χ_k : ∂T → ∂T_k. We thus approximate f(χ_k(x)) in T, map the outcome to
T_k and tile Ω with s such approximations.



                                                                            80
THE EQUILATERAL TRIANGLE

Although the spectrum was already known to Lamé, the most useful form is
due to Práger:
$$\begin{aligned}
u_{m,n}(x,y) ={}& 2\sin\frac{\pi m x}{\sqrt3}\cos\pi n y - 2(-1)^{\frac{n+m}{2}}\sin\frac{\pi(3n+m)x}{2\sqrt3}\cos\frac{\pi(n-m)y}{2}\\
&+ 2(-1)^{\frac{n-m}{2}}\sin\frac{\pi(3n-m)x}{2\sqrt3}\cos\frac{\pi(n+m)y}{2},\\
v_{m,n}(x,y) ={}& 2\cos\frac{\pi m x}{\sqrt3}\cos\pi n y + 2(-1)^{\frac{n+m}{2}}\cos\frac{\pi(3n+m)x}{2\sqrt3}\cos\frac{\pi(n-m)y}{2}\\
&+ 2(-1)^{\frac{n-m}{2}}\cos\frac{\pi(3n-m)x}{2\sqrt3}\cos\frac{\pi(n+m)y}{2},
\end{aligned}$$
both with the eigenvalue $\lambda_{m,n} = \pi^2(n^2 + \tfrac13 m^2)$. Here 1 ≤ m ≤ n for the odd
eigenfunction $u_{m,n}$, 0 ≤ m ≤ n for even $v_{m,n}$. In both cases m ≡ n (mod 2).




                                                                     81
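These formulas are easy to mistype, so here is a symbolic sanity check that $u_{m,n}$ satisfies $\Delta u = -\pi^2(n^2 + \tfrac13 m^2)u$ (the sign follows the convention $\Delta u = \lambda u$ with nonpositive eigenvalues), for the admissible pair m = 1, n = 3:

```python
import sympy as sp

x, y = sp.symbols('x y')
m, n = 1, 3  # admissible: 1 <= m <= n and m = n (mod 2)

s3 = sp.sqrt(3)
u = (2*sp.sin(sp.pi*m*x/s3)*sp.cos(sp.pi*n*y)
     - 2*(-1)**((n + m)//2)*sp.sin(sp.pi*(3*n + m)*x/(2*s3))*sp.cos(sp.pi*(n - m)*y/2)
     + 2*(-1)**((n - m)//2)*sp.sin(sp.pi*(3*n - m)*x/(2*s3))*sp.cos(sp.pi*(n + m)*y/2))

# each of the three plane-wave products carries the same |wavevector|^2,
# so Laplacian + lambda annihilates u term by term
lam = sp.pi**2 * (n**2 + sp.Rational(1, 3)*m**2)
residual = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + lam*u)
assert residual == 0
print("Delta u = -pi^2 (n^2 + m^2/3) u confirmed for (m, n) = (1, 3)")
```

The same check goes through verbatim for $v_{m,n}$ with sin replaced by cos in the x-factors.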
The function f(x, y) = e^{x−y}.

[Figure: approximations (above) and errors (below) for m, n ≤ K, for K = 4, 10, 20.]
                                                                                                                                                                                   82
Application to spectral problems
[H. Brunner, AI & S.P. Nørsett]

Given g ∈ C[−1, 1] and ω > 0, we seek the solution of the Fredholm spectral
problem
$$\int_{-1}^{1} f(x)\,e^{i\omega g(x-y)}\,\mathrm dx = \lambda f(y), \qquad y \in [-1, 1].$$


  • The Fredholm operator is compact, hence its spectrum consists solely of
    eigenvalues (i.e., point spectrum), which accumulate at the origin.


  • The operator is symmetric, but not self-adjoint (it is complex-valued!).


  • ω > 0 might be large, in which case the kernel is highly oscillatory. This
    isn’t essential to our argument! High oscillation will arise in our analysis
    regardless of the size of ω.

                                                                           83
The finite section method I: The naive approach

Approximate the integral by quadrature. This leads to the algebraic eigenvalue
problem
$$\sum_{k=-N}^{N} b_k f_k e^{i\omega g(c_k - c_m)} = \lambda f_m, \qquad -N \le m \le N,$$
where the $c_k$ and $b_k$ are the quadrature nodes and weights respectively, while $f_k \approx f(c_k)$.

SHORTCOMINGS:

 1. The (k, m) element of the matrix is of size $|b_k| = O(N^{-1})$, hence decays
    very slowly with N. Therefore, we require a very large matrix!

 2. If ω ≫ 0 then the elements of the matrix exhibit high oscillation, unless N
    is really huge! This further increases the size of the matrix.

 3. For practical purposes, when N is really large, we are compelled to take
    equally-spaced points: $c_k = \frac{k}{N}$ and $b_k \equiv \frac{2}{2N+1}$. No benefits of “clever”
    quadrature!
                                                                          84
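A minimal sketch of the naive finite section (the choices g(t) = t², ω = 20 and N = 200 are mine, for illustration), building the dense (2N+1)×(2N+1) matrix whose eigenvalues approximate the Fredholm spectrum:

```python
import numpy as np

def naive_finite_section(g, omega, N):
    """Equally spaced nodes c_k = k/N, uniform weights b_k = 2/(2N+1)."""
    k = np.arange(-N, N + 1)
    c = k / N
    b = 2.0 / (2 * N + 1)
    # A[m, k] = b_k * exp(i*omega*g(c_k - c_m)); eigenvector entries f_k ~ f(c_k)
    A = b * np.exp(1j * omega * g(c[None, :] - c[:, None]))
    return np.linalg.eigvals(A)

ev = naive_finite_section(lambda t: t**2, omega=20.0, N=200)
ev = ev[np.argsort(-np.abs(ev))]
print(np.abs(ev[:5]))
```

Every entry of the matrix has modulus 2/(2N+1) = O(N⁻¹), in line with shortcoming 1, while the computed eigenvalues cluster rapidly at the origin, as compactness predicts.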
The finite section method II: Orthogonal basis

Let B = {ϕ₁, ϕ₂, …} be an orthonormal basis of L²[−1, 1]. We seek to
represent the eigenfunction f within this basis,
$$f(x) = \sum_{k=1}^{\infty} f_k \varphi_k(x).$$
Substituting into the Fredholm equation, multiplying by ϕ_m, integrating and
exploiting orthogonality, we obtain an infinite-dimensional algebraic eigenvalue
problem
$$\sum_{k=1}^{\infty} A_{m,k} f_k = \lambda f_m, \qquad m \in \mathbb N,$$
where
$$A_{m,k} = \int_{-1}^{1}\int_{-1}^{1} \varphi_k(x)\,\varphi_m(y)\,e^{i\omega g(x-y)}\,\mathrm dx\,\mathrm dy.$$
The main idea is to truncate the matrix A (hence “finite section”), under the
assumption that the eigenvalues of the finite section are near the true eigenvalues
– this assumption is justified by compactness.
                                                                          85
Optimal basis

How do we choose the functions ϕ_k? The standard approach of Fourier analysis
is
$$\varphi_{2n-1}(x) = \cos(n-1)\pi x, \qquad \varphi_{2n}(x) = \sin n\pi x, \qquad n \ge 1,$$
but this means that $A_{m,n} \sim O(m^{-1} n^{-1})$.

However, once we choose the modified Fourier basis
$$\varphi_{2n-1}(x) = \cos(n-1)\pi x, \qquad \varphi_{2n}(x) = \sin(n - \tfrac12)\pi x, \qquad n \ge 1,$$
it is true that $A_{m,n} \sim O(m^{-2} n^{-2})$.
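The decay gap is easy to observe numerically. For the linear oscillator g(t) = t (my choice; the matrix then factors into one-dimensional moments) the relevant sine moments of the classical basis decay like n⁻¹, while those of the modified basis decay like n⁻²; a sketch:

```python
import math
from scipy.integrate import quad

omega = 1.0

def sine_moment(freq):
    # |integral_{-1}^{1} sin(freq*pi*x) sin(omega*x) dx|, the odd part of e^{i*omega*x}
    val, _ = quad(lambda x: math.sin(freq * math.pi * x) * math.sin(omega * x),
                  -1, 1, limit=200)
    return abs(val)

ns = (10, 20, 40, 80)
classical = [sine_moment(n) for n in ns]        # sin(n*pi*x)
modified = [sine_moment(n - 0.5) for n in ns]   # sin((n - 1/2)*pi*x)

# doubling n roughly halves the classical moment (n^-1 decay) ...
print([classical[i] / classical[i + 1] for i in range(3)])
# ... but divides the modified moment by about four (n^-2 decay)
print([modified[i] / modified[i + 1] for i in range(3)])
```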

This is just the first step in a much longer and more substantive numerical
analysis of such problems. But that is another lecture :-)


                                                                       86

				