                                        Deconvolution


    Olubode Caleb Adetunji (bode@aims.ac.za)
African Institute for Mathematical Sciences (AIMS)

           Supervised by: Professor George Smith
           University of Cape Town, South Africa


                          22 May 2009
 Submitted in partial fulfillment of a postgraduate diploma at AIMS
Abstract

The central idea of deconvolution is to produce an enhanced image of the subterranean earth through
an iterative improvement algorithm. The recorded digital seismic data set is full of extraneous signals
that need to be removed. Deconvolution is a seismic data processing technique employed to recover
the desired signal from the trace. The method is based on the assumption that a seismic trace can be
represented as the convolution of a reflectivity series with a seismic wavelet. The desired signal in a
seismic trace is the white reflectivity series that identifies
closely-spaced reflecting interlayers of the earth.
The least-squares-error criterion is used to design a predictive spiking deconvolution operator, or filter,
to remove the unwanted wavelets from the trace, and yield the reflectivity series or function. The basic
purpose of a least-square filter is to convert an input signal into a desired output signal. In the design
of the filter, the autocorrelation of the input signal and the crosscorrelation of the desired output signal
with the input signal need to be computed.
The standard model for deconvolution of a seismic trace (actual output signal) is that a seismic trace
is a convolution of a signature wavelet (which consists of source energy wavelet, instrument impulse
responses, and near-surface effects) and a signature-free trace (made up of a white reflectivity series or
function and reverberation or multiples). The signature wavelet can equivalently be expressed as the
convolution of an all-pass filter and a minimum-delay wavelet. In deconvolving a seismic trace, a minimum-delay
wavelet counterpart of the signature is designed in such a way that it has the same energy spectrum
as the known energy spectrum of the signature. The inverse of the minimum-delay wavelet counterpart
(spiking filter) is then computed by a polynomial division having transformed the minimum-delay wavelet
to its z-transform equivalent.
An all-pass filter, a component of the signature, is computed by convolving the obtained spiking filter
with the signature. Subsequently, the inverse of the all-pass filter, which is its reverse with respect
to time zero, is convolved with the seismic trace to yield a dephased trace. The dephased
trace is then convolved with the spiking filter to obtain the signature-free trace. Another spiking
filter is then designed by the method of least squares on the basis that the autocorrelation of the
signature-free trace is the same as the autocorrelation of the reverberation wavelet. This is due to the
uncorrelated white reflectivity that is averaged out in the computation of the autocorrelation of the
wavelet. Finally, the required white reflectivity series is obtained by convolving the second spiking filter
with the signature-free trace.




Declaration

I, the undersigned, hereby declare that the work contained in this essay is my original work, and that
any work done by others or by myself previously has been acknowledged and referenced accordingly.




Olubode Caleb Adetunji, 22 May 2009

Contents

1 Digital Filtering and Wavelets Synthesis                                                                  1
   1.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     1
   1.2   Sampling of Seismic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       1
   1.3   Digital Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    2
   1.4   Wavelets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     3
   1.5   Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      4
   1.6   Minimum, Mixed and Maximum Delay . . . . . . . . . . . . . . . . . . . . . . . . . . .             5
   1.7   Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     6

2 Wavelet Processing                                                                                       10
   2.1   Shaping Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
   2.2   Spiking Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
   2.3   White Convolutional Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   2.4   Wavelet Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   2.5   All-Pass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   2.6   Convolutional Model     . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
   2.7   Non Minimum-Delay Wavelet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
   2.8   Signature Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3 Deconvolution                                                                                            20
   3.1   Seismic Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
   3.2   Least-squares Prediction and Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . 21
   3.3   The Prediction-Error Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
   3.4   Spiking Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
   3.5   Gap Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
   3.6   Piece Meal Convolutional Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
   3.7   Time Varying Convolutional Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
   3.8   Random Reflection Coefficient Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

4 Application of the Convolutional Model                                                                   28
   4.1   Water Reverberation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


    4.2   An Illustrative Example of Deconvolution of Non-minimum Delay Wavelet and Trace
          Based on the Convolutional Model Using Canonical Representation of a Seismic trace or
          a Wavelet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
    4.3   Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

A                                                                                                       33
    A.1 Dual Sensor Wavelet Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
    A.2 Einstein or Predictive Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
    A.3 Tail Shaping and Head Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
    A.4 Gap Deconvolution of a Mixed-delay Wavelet . . . . . . . . . . . . . . . . . . . . . . . 37
    A.5 Prewhitening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
    A.6 Convolutional Model in the Frequency Domain          . . . . . . . . . . . . . . . . . . . . . . 38

References                                                                                              44




1. Digital Filtering and Wavelets Synthesis
1.1     Introduction

Exploration of the earth's subsurface structure for natural resources such as petroleum or natural gas
involves the use of an artificially generated, measurable seismic source wavelet produced by an array of
airguns. The seismic wavelet is introduced into the earth by a collection of seismic sources at various
positions, and for each seismic source an array of geophones or hydrophones, which constitutes a
receiver, is positioned. The output on each receiver is a trace with respect to each
source position and receiver position. That is, a seismic source produces a collection of traces or a
seismic record. The desired output signal at the receiver is the reflection coefficients of interlayers of
the subsurface at various times, but in reality the output signal is a mixture of the reflectivity series with
some signal-generated noise. The motivation of this essay is to describe how the signal-generated noise
is eliminated in order to obtain an approximate estimate of the reflectivity series.


1.2     Sampling of Seismic Data

A continuously observed time function, as in Figure 1.1(a), is made suitable for analysis on a computer
by approximating it digitally with a list of numbers. For example, a digital approximation of b(t) in
Figure 1.1(a) is obtained by observing b(t) at a uniform time spacing, say the 4 milliseconds often used
in seismic data processing (so that it can be expressed on a time-index scale). Such a digital
approximation of the continuous function could be denoted by the vector

                                 b(t) = (. . . , 0, 0, 1, 2, 0, −1, −1, 0, 0, . . .).                        (1.1)




       Figure 1.1: (a) A continuous time function sampled at uniform time intervals. (b) Coefficients of
       zB(z) are shifted versions of the coefficients of B(z). (c) Response to two explosions.


Alternatively, the function b(t) can be represented by a polynomial whose coefficients are the values of
b(t) at successive time points [Dra97]. This polynomial representation is known as the z-transform. For
Figure 1.1(a), we have

                                      B(z) = 1 + 2z + 0z^2 − z^3 − z^4,                                     (1.2)



where z is a unit delay operator [Dra97]. The coefficients of zB(z) = z + 2z^2 − z^4 − z^5 are plotted
in Figure 1.1(b), which is the same waveform as in Figure 1.1(a) but delayed by one time unit [Dra97].
In other words, the time function b(t) is delayed n time units when B(z) is multiplied by z^n. The delay
operator z is very important in analysing waves simply because waves take a certain amount of time to
get from place to place. Suppose b(t) represents the acoustic pressure function or seismogram observed
after a distant explosion. Then b(t) is called the impulse response. If another explosion occurs at t = 10
time units after the first, the expected output pressure function y(t) is depicted in Figure 1.1(c). In
terms of z-transforms, this would be expressed as Y(z) = B(z) + z^{10} B(z). If the first explosion were
followed by an implosion of half strength, we would have Y(z) = B(z) − (1/2) z^{10} B(z). If pulses
overlap one another in time (as would be the case if B(z) were of degree greater than 10), the waveforms
add together in the region of overlap. The supposition that they just add together without any
interaction is called the linearity assumption [Dra97]; that is, seismic waveforms satisfy linear
superposition. Nonlinearity arises from large-amplitude disturbances, not from geometrical complications.
As explained above, if there were an explosion at t = 0, a half-strength implosion at t = 1 and a
quarter-strength explosion at t = 3, then this sequence of events determines a “source” time series,
x_t = (1, −1/2, 0, 1/4). The z-transform of the source is X(z) = 1 − (1/2)z + (1/4)z^3. Thus, the
output function displayed on a seismometer, y_t, for this sequence of explosions and implosions has a
z-transform Y(z) of the form

                       Y(z) = B(z) − (1/2) z B(z) + (1/4) z^3 B(z)                                          (1.3)
                            = (1 − (1/2) z + (1/4) z^3) B(z)                                                (1.4)
                            = X(z) B(z).                                                                    (1.5)

The last equation shows that the output signal is the product of the wavelets in the frequency domain
(as functions of z). In the time domain, this corresponds to the convolution of the wavelet amplitudes
at the respective times.
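As a quick numerical check of equations 1.3 to 1.5, multiplying the z-transforms X(z) and B(z) is the
same as convolving the coefficient sequences. A minimal sketch in Python (numpy assumed), using the
sampled wavelet of equation 1.1 as the impulse response:

import numpy as np

# impulse response b(t) of equation (1.1) and the source series x_t
b = np.array([1.0, 2.0, 0.0, -1.0, -1.0])
x = np.array([1.0, -0.5, 0.0, 0.25])

# multiplying the z-transforms X(z)B(z) is the same as convolving
# the coefficient sequences in the time domain
y = np.convolve(x, b)
print(y)   # coefficients of Y(z) = X(z)B(z)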


1.3     Digital Filtering

The output of a digital filter is obtained by convolving the digitised input signal with the filter’s impulse
response. A finite impulse-response (FIR) filter produces an output y_n at time n given by

                                       y_n = \sum_{i=0}^{N} b_i x_{n−i},                                    (1.6)

where b_i denotes the filter coefficient at time t = i and x_n denotes the signal value at the
corresponding time t = n. The z-transform of this filter is an Nth-order polynomial in z given by

                               B(z) = b_0 + b_1 z + b_2 z^2 + b_3 z^3 + . . . + b_N z^N.                    (1.7)

Its infinite-length counterpart b = (b_0, b_1, b_2, . . .) has the power series in z

                                        B(z) = b_0 + b_1 z + b_2 z^2 + . . . .                              (1.8)

This means that, for a sampled input signal, the number x_n enters the filter as follows: x_{−2} at time
t = −2, x_{−1} at time t = −1, x_0 at time t = 0, x_1 at time t = 1 and x_N at time t = N. The filter,
then, produces output yn as follows: y−1 at time n = −1, y0 at time n = 0, y1 at time n = 1 and yN
at time n = N , in the case of a finite filter.
The impulse response function (or transfer function) of N cascaded filters is the convolution of the
impulse response function of each of the N cascaded filters. For example, given that filter a = (2, 1)
(length 2) and b = (3, 4) (length 2), the two-cascaded filter gives an impulse response of the convolution
a ∗ b. The convolution can be computed simply by first transforming each filter to its z-transform, and
multiplying the resulting polynomials. That is, the z-transforms of a and b are, respectively, A(z) = 2 + z
and B(z) = 3 + 4z, and we have

                            (3 + 4z) × (2 + z) = 6 + 8z + 3z + 4z^2 = 6 + 11z + 4z^2.
The convolution a ∗ b = c = (6, 11, 4). The idea of a digital filter is important in the analysis of the
output waveform on the seismometer because the earth’s response to a source wavelet is similar to a
filter’s effect on a signal [Dra97]. In general, multiplication of the polynomials (multiplication in the
frequency domain) corresponds to convolution of the coefficients (convolution in the time domain). The
multiplication in the frequency domain represented in equation 1.3 illustrates the underlying basis of
linear-system theory. Thus, the convolution of two impulse functions is represented as

                                      c_n = \sum_{k=0}^{\infty} a_k b_{n−k}.                                (1.9)
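The cascade example above, with a = (2, 1) and b = (3, 4), can be reproduced with a one-line
convolution (Python/numpy), and the result matches the hand computation:

import numpy as np

a = [2, 1]            # first filter
b = [3, 4]            # second filter
c = np.convolve(a, b) # impulse response of the two-filter cascade
print(c)              # [ 6 11  4 ], matching 6 + 11z + 4z^2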



1.4       Wavelets

The energy of a digitised signal, obtained by sampling an analogue signal at discrete time steps, is the
sum over all time steps ∆t of the product of the amplitude and its complex conjugate. A wavelet is a
signal that has finite energy, with the bulk of its energy confined to a finite interval on the time scale
[RT08]. A wavelet can be
expressed as b = (. . . , b−2 , b−1 , b0 , b1 , b2 , . . .), where bn is the coefficient representing the amplitude
at discrete time n. Every wavelet consists of two components, namely, a causal component defined
as bc = (. . . , 0, 0, b0 , b1 , b2 , . . . ) made up of all coefficients for n ≥ 0, and the anticausal component
defined as bAC = (. . . , b−2, b−1, 0, 0, 0, . . .), made up of the coefficients for n < 0.
The reversal of a wavelet is obtained by reflecting the wavelet about a fixed time index. The zero
point reversal (ZPR) of the wavelet b = (. . . , b_{−2}, b_{−1}, b_0, b_1, b_2, . . .) is
b^{ZPR} = (. . . , b_2^*, b_1^*, b_0^*, b_{−1}^*, b_{−2}^*, . . .), where the complex conjugate b_0^* occurs at time 0.
The zero point reversal of the causal finite-length wavelet b = (b_0, b_1, b_2, . . . , b_N) is
b^{ZPR} = (b_N^*, . . . , b_2^*, b_1^*, b_0^*). It is noteworthy that the only non-zero causal coefficient of b^{ZPR}
is b_0^*, while its other non-zero coefficients are anticausal. The autocorrelation of a wavelet a is given
by the convolution of the wavelet with its ZPR (a ∗ a^{ZPR}). Similarly, the cross-correlation of wavelet a
with wavelet b is given by a ∗ b^{ZPR}.
The autocorrelation of a signal b = (. . . , b_{−2}, b_{−1}, b_0, b_1, b_2, . . .) is defined as

                                      r_k = \sum_{j=-\infty}^{\infty} b_{j+k} b_j^*.                        (1.10)

For a causal wavelet b = (b_0, b_1, b_2, . . .), we have r_k = r_{−k}^* for k = 0, 1, . . . , N.
For example, let us consider the wavelet b = (b_0, b_1, b_2) and its reverse, the anticausal wavelet
b^R = (b_2^*, b_1^*, b_0^*); a folding table as shown below can then be formed:
                 |   b_2^*        b_1^*        b_0^*
          -------+--------------------------------------
            b_0  |   b_0 b_2^*    b_0 b_1^*    b_0 b_0^*
            b_1  |   b_1 b_2^*    b_1 b_1^*    b_1 b_0^*
            b_2  |   b_2 b_2^*    b_2 b_1^*    b_2 b_0^*

From the table, folding diagonally, we have

                                       r_{−2} = b_0 b_2^*                                                   (1.11)
                                       r_{−1} = b_1 b_2^* + b_0 b_1^*                                       (1.12)
                                       r_0    = b_0 b_0^* + b_1 b_1^* + b_2 b_2^*                           (1.13)
                                       r_1    = b_2 b_1^* + b_1 b_0^*                                       (1.14)
                                       r_2    = b_2 b_0^*,                                                  (1.15)

where (b_0, b_1, b_2) are the coefficients of the signal whose autocorrelation is being computed. Thus, it
follows from the definition that a wavelet and its reverse have the same autocorrelation.
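These relations are easy to confirm numerically. A small Python/numpy sketch (the real wavelet
(3, 2, 1) is an arbitrary example) computes the autocorrelation and checks that a wavelet and its reverse
share it:

import numpy as np

b = np.array([3.0, 2.0, 1.0])                        # causal wavelet (b0, b1, b2)
r = np.correlate(b, b, mode='full')                  # lags (r_-2, r_-1, r_0, r_1, r_2)
r_of_reverse = np.correlate(b[::-1], b[::-1], mode='full')

print(r)                                             # [ 3.  8. 14.  8.  3.]
print(np.allclose(r, r_of_reverse))                  # True: the reverse has the same autocorrelation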


1.5     Fourier Transform

We are interested in observing the effect of an FIR filter on an input signal. The output, yn , of a filter
with an input wavelet or signal, xn = ei2πf n∆t , is the convolution of the filter with the input signal.
That is,
                            yn = b0 xn + b1 xn−1 + b2 xn−2 + · · · + bN xn−N .                     (1.17)
The transfer function of a filter is the ratio of its output signal to its input signal. For an Nth-order
causal (FIR) filter, we have

       B(f) = output / input = (b_0 e^{i2πfn∆t} + b_1 e^{i2πf(n−1)∆t} + · · · + b_N e^{i2πf(n−N)∆t}) / e^{i2πfn∆t},     (1.18)
which is the same as
                             B(f ) = b0 + b1 e−i2πf ∆t + · · · + bN e−i2πf N ∆t .                   (1.19)
For example, a unit delay filter, as defined in equation 1.2, can be written as z = e−i2πf ∆t = e−iφ(f ) ,
where φ(f ) is called phase lag. In polar form, the transfer function can also be expressed as B(f ) =
|B(f )|e−iφ(f ) , where |B(f )| and φ(f ) are, respectively, the amplitude spectrum and the phase-lag
spectrum (or simply the phase spectrum) of the filter. For example, the amplitude of a two-length
vector, b = (b0 , b1 ), expressed in z-transform as B(f ) = b0 + b1 e−i2πf ∆t is

                        |B(f)| = \sqrt{b_0^2 + 2 b_0 b_1 \cos 2πf∆t + b_1^2}.                               (1.20)

The phase lag or phase of the transfer function is

          φ(f) = − tan^{−1} [ −b_1 \sin 2πf∆t / (b_0 + b_1 \cos 2πf∆t) ] = tan^{−1} [ b_1 \sin 2πf∆t / (b_0 + b_1 \cos 2πf∆t) ].     (1.21)
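A quick numerical cross-check of equations 1.20 and 1.21 for an assumed two-length wavelet
(b0, b1) = (3, 2) and ∆t = 0.004 s (both values illustrative), using Python/numpy:

import numpy as np

b0, b1 = 3.0, 2.0                        # assumed two-length wavelet (b0, b1)
dt = 0.004                               # sampling interval in seconds
f = np.linspace(-125.0, 125.0, 501)      # Nyquist range for dt = 4 ms

B = b0 + b1 * np.exp(-2j * np.pi * f * dt)          # transfer function B(f)

amp_formula = np.sqrt(b0**2 + 2*b0*b1*np.cos(2*np.pi*f*dt) + b1**2)            # eq. (1.20)
phase_formula = np.arctan2(b1*np.sin(2*np.pi*f*dt), b0 + b1*np.cos(2*np.pi*f*dt))  # eq. (1.21)

print(np.allclose(np.abs(B), amp_formula))          # True
print(np.allclose(-np.angle(B), phase_formula))     # True (phase lag = minus the phase of B)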
Consequently, the Fourier transform (or spectrum) of a two-sided wavelet, b, is

                 B(f) = \sum_{n=-\infty}^{\infty} b_n e^{−2πif∆tn} = A(f) e^{iψ(f)} = A(f) e^{−iφ(f)},      (1.22)
where ψ(f) and A(f) are the phase-lead spectrum and amplitude spectrum respectively. Suppose the
sampling frequency is f_s = 1/∆t; then e^{−2πif_s∆tn} = e^{−2πin} = 1. Hence,

                       B(f + f_s) = \sum_{n=-\infty}^{\infty} b_n e^{−2πi(f+f_s)∆tn} = B(f).                (1.23)

The latter result shows that the spectrum is periodic with a period equal to the sampling frequency.
In the case where ∆t = 1, 2πf∆t = 2πf = w, and the Fourier transform becomes

                            B(w) = \sum_{n=-\infty}^{\infty} b_n e^{−iwn} = A(w) e^{−iφ(w)},                (1.24)

for the Nyquist range of −π ≤ w ≤ π. However, the spectrum of the ZPR of wavelet, b, being
                                  ∞                           ∞
             B   ZP R
                        (f ) =          b∗ e−2πif ∆tk
                                         −k             =          b∗ e2πif ∆tn = B ∗ (f ) = A(f )eiφ(f ) ,
                                                                    n                                         (1.25)
                                 k=−∞                       n=−∞

implies that the ZPR of wavelet has the same phase as the wavelet but with a negative phase. The
time-step, ∆t, equals 0.004 seconds as in figure 1.2 is often used in digital sampling of seismic analogue
record and accordingly the sampling frequency is 250 hertz, where the Nyquist range is from -125Hz to
+125Hz. The sampling rate of 250 samples per second as twice the cut-off frequency 125 cycles/second
means there are only two samples per cycle at the cut-off frequency. In effect, at the sample point,
any sinusoid of arbitrary frequency is equivalent to a sinusoid with a frequency lying within the Nyquist
range. Here, “equivalent” in the sense that the two sinusoids have the same numerical value at the
sample point [RT08].
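This equivalence at the sample points can be illustrated with a short Python/numpy check (the 60 Hz
test frequency is arbitrary):

import numpy as np

dt = 0.004                      # 4 ms sampling interval
fs = 1.0 / dt                   # 250 Hz sampling frequency
n = np.arange(0, 100)           # sample indices

f_in_band = 60.0                # a frequency inside the Nyquist range
f_aliased = f_in_band + fs      # 310 Hz, outside the Nyquist range

s1 = np.cos(2 * np.pi * f_in_band * n * dt)
s2 = np.cos(2 * np.pi * f_aliased * n * dt)

print(np.allclose(s1, s2))      # True: indistinguishable at the sample points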




Figure 1.2: Raw data is shown on the top left, of about a half-second duration. Right shows amplitude
spectra (magnitude of FT). In successive rows the data is sampled less densely [Dra97].



1.6     Minimum, Mixed and Maximum Delay

A minimum-delay wavelet has its energy concentrated near its arrival time, whereas a mixed-delay
wavelet has its energy distributed away from its arrival time. A wavelet whose energy is concentrated
at the back, i.e. the reverse of a finite-length minimum-delay wavelet, is known as a maximum-delay
wavelet. For example, a two-length wavelet (b_0, b_1) is called minimum delay if |b_0| ≥ |b_1|, and
its z-transform b_0 + b_1 z is referred to as a minimum-delay polynomial. Similarly, for |b_0| ≥ |b_1|, the
two-length reverse wavelet (b_1^*, b_0^*) is a maximum-delay wavelet with the maximum-delay polynomial
c(z) = b_1^* + b_0^* z.

The example below illustrates the definition. A spike wavelet is a waveform with a narrow width and a
sharp crest and trough at one time, with very small amplitude at all other times. Let us consider a unit
spike, (1, 0), with coefficient 1 at time index 0; its Fourier transform is 1 for all w. Its amplitude
spectrum is equal to one for all frequencies, i.e.

                                         1 + 0(z) = 1 + 0(e^{−iw}) = 1,                                     (1.26)

and its phase is zero. For the unit delayed spike (0, 1), the Fourier transform is 0 + e^{−iw} = \cos(w) − i\sin(w).
Its amplitude is likewise one for all frequencies, but its phase-lag spectrum is

                      − tan^{−1} [ −\sin(w) / \cos(w) ] = tan^{−1}(\tan(w)) = w.                            (1.27)

Another minimum-phase wavelet, (1, 0.5), and its maximum-phase counterpart, (0.5, 1), have the same
amplitude spectrum, \sqrt{1 + \cos(w) + 0.25}, but different phase spectra.
Comparing equations 1.26 and 1.27, a minimum-delay wavelet can be formed from a non-minimum-delay
wavelet of the same energy by altering the phase.
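A small Python/numpy check of this pair of wavelets, confirming the common amplitude spectrum and
the differing phase spectra:

import numpy as np

w = np.linspace(-np.pi, np.pi, 401)          # Nyquist range with dt = 1

def spectrum(b):
    # Fourier transform B(w) = b0 + b1 e^{-iw} of a two-length wavelet
    return b[0] + b[1] * np.exp(-1j * w)

B_min = spectrum((1.0, 0.5))                 # minimum-delay wavelet
B_max = spectrum((0.5, 1.0))                 # maximum-delay counterpart

print(np.allclose(np.abs(B_min), np.abs(B_max)))               # True: same amplitude spectrum
print(np.allclose(np.abs(B_min), np.sqrt(1.25 + np.cos(w))))   # True: matches the formula
print(np.allclose(np.angle(B_min), np.angle(B_max)))           # False: the phases differ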


1.7     Energy

The energy, E, of an infinite-length causal wavelet, (b_0, b_1, b_2, . . .), is given by

                                           E = b_0^2 + b_1^2 + b_2^2 + · · · .                              (1.28)

If the coefficients of the wavelet are complex, the energy is given by

                                        E = b_0 b_0^* + b_1 b_1^* + b_2 b_2^* + · · · .                     (1.29)

Since the energy is required to be stable in most practical systems, a wavelet needs to be a minimum-
phase wavelet of finite length. By comparing the energy build-up curves (the cumulative sums of the
squared coefficients), it is observed that the energy build-up of the maximum-delay wavelet never exceeds
that of the minimum-delay wavelet. The energy build-up curve of a mixed-delay wavelet of length N + 1
lies between the curves of the minimum-delay and maximum-delay wavelets of length N + 1.
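A minimal numerical illustration of the energy build-up comparison, using the two-length minimum-delay
wavelet (3, 2) and its maximum-delay reverse (2, 3) as an example (Python/numpy):

import numpy as np

b_min = np.array([3.0, 2.0])           # minimum-delay wavelet
b_max = np.array([2.0, 3.0])           # its maximum-delay reverse

# energy build-up: cumulative sum of squared coefficients
build_min = np.cumsum(b_min**2)        # [ 9. 13.]
build_max = np.cumsum(b_max**2)        # [ 4. 13.]

print(build_min, build_max)
print(np.all(build_max <= build_min))  # True: the max-delay build-up never exceeds the min-delay one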
To deconvolve a signature wavelet that is a non-minimum-phase wavelet, the energy of the given signature
wavelet needs to be computed, and this is used to determine a minimum-phase wavelet equivalent with
the same energy as the non-minimum-phase wavelet. Before deconvolution of seismic traces, the field
data acquired from land (Figure 1.3) and marine surveys are preprocessed. The preprocessing has the
task of reading all the usual field tape formats, or demultiplexing, and of sorting the seismic traces
according to sub-surface points. After demultiplexing and gain recovery, every seismic record is usually
displayed and inspected for noisy traces, reversed traces, spikes, or other anomalous conditions that
require editing, a time-consuming but essential step towards producing the best possible final product
(Figure 1.3) [SR79]. In addition to the editing of actual seismic traces, all field data can be read in as
input, sorted, and written out onto tapes as basic data for subsequent standard processing (Figure 1.5)
[SR79]. Then, deconvolution of traces is carried out before some standard
processing such as basic static and dynamic correction, elevation data, shot and geophone positions,
normal moveout corrections, stacking, velocity analysis, migration analyses, etc. [SR79]. Subsequently,
the deconvolved traces are gathered into common-depth-point sets and seismic interpretation is then
carried out.
A seismic trace (Figure 1.4) is obtained from a receiver that is made up of an array of geophones, say
100 geophones. A seismic record (Figure 1.5) is the set of traces received at the different receiver
locations due to one source. Thus, a number of seismic records give information at the different locations
of a number of seismic sources (shots) [SR79].




                Figure 1.3: Overall layout of a seismic data gathering system [PBL72]




    Figure 1.4: A small portion of a seismic record such as the input record of Figure 1.5 [PBL72]




Figure 1.5: A typical input data record of seismic traces as gathered by a system such as that of Figure
1.3 [PBL72]




                       Figure 1.6: Processed signal record of Figure 1.5 [PBL72]




   Figure 1.7: (a) The prediction of a geometrically decaying wavelet; the prediction distance is 5.
   (b) The prediction of a geometrically growing wavelet; the prediction distance is 5. [RT08]




   Figure 1.8: (a) Raypaths for the downward pass through the water layer. (For clarity, the raypaths
   have been drawn as slanting lines, although in our normal-incidence model they are perpendicular to
   the two interfaces.) (b) Reverberation between the water-surface and water-bottom interfaces. [RT08]
2. Wavelet Processing
Wavelet processing is the act of shaping an input signal into a desired output signal using a suitable
filter transfer function. The convolutional model consists of three components: the input signal, the
unit-impulse response function and the output signal. The output signal is equal to the convolution of
the input signal with the unit impulse response function [RT08]. A field wavelet is causal and hence is
a non-zero-phase pulse. It is converted to a zero-phase condition by a phase-zeroing filter in the
wavelet processing step.


2.1     Shaping Filter

For every seismic trace a shaping filter, f, expressed in terms of the input signal and the desired output
signal, is required. The design criterion for such a filter uses a model made up of four signals:

   1. the input signal, x,
   2. the desired output signal, z,
   3. the shaping filter, f , to be designed, and
   4. the actual output signal y= f ∗ x.

The technique for the design of such a filter is based on the least squares criterion that produces a filter
which is time invariant (time invariant in the sense that the same multiple wavelet is attached to each
coefficient of the input signal) and linear.
A causal filter is a one-sided filter; here, a finite-length one-sided filter of the form f = (f_0, f_1, . . . , f_N).
If the input to the causal filter is the signal x, the output is the signal y = f ∗ x, whose coefficients are
given by the convolution

                                       y_k = \sum_{n=0}^{N} f_n x_{k−n}.                                    (2.1)
In this design, the error is the difference between the desired output and the actual output. The value
of the error signal at time k is z_k − y_k. The basic idea is to minimise the energy of the error signal;
that is, the filter coefficients f_n are sought that minimise the error energy. The error energy is given as

                                       I = \sum_k (z_k − y_k)^2,                                            (2.2)

which can be written as

                                  I = \sum_k ( z_k − \sum_n f_n x_{k−n} )^2.                                (2.3)

The index on the inner summation ranges over n = 0, 1, 2, . . . , N for a causal filter and over
n = −N, −N + 1, . . . , N − 1, N for a non-causal filter.
Taking the partial derivative of the error energy with respect to each coefficient f_m, we have

        ∂I/∂f_m = \sum_k 2 ( z_k − \sum_n f_n x_{k−n} ) · ∂/∂f_m ( z_k − \sum_n f_n x_{k−n} ).              (2.4)


In the partial derivative with respect to the coefficient f_m, the term with n = m is the only one whose
derivative is non-zero, so

        ∂I/∂f_m = \sum_k 2 ( z_k − \sum_n f_n x_{k−n} ) (−x_{k−m})                                          (2.5)

                = −2 \sum_k z_k x_{k−m} + 2 \sum_n f_n \sum_k x_{k−n} x_{k−m}.                              (2.6)
Recall that the autocorrelation of the signal x was given as

                                       r_{m−n} = \sum_k x_{k−n} x_{k−m}^*,                                  (2.7)

and likewise the crosscorrelation of the signals z and x is

                                       g_m = \sum_k z_k x_{k−m}^*,                                          (2.8)

where x_{k−m}^* = x_{k−m} for a real-valued signal, and x_{k−m}^* denotes the complex conjugate of the
amplitude x_{k−m}.
Then, from equation 2.6, the partial derivative becomes

                                       ∂I/∂f_m = −2 g_m + 2 \sum_n f_n r_{m−n}.                             (2.9)

Setting the partial derivative equal to zero in order to minimise the error energy, the following set of
linear simultaneous equations is obtained:

                                       \sum_n f_n r_{m−n} = g_m.                                            (2.10)

Equation 2.10 is called the normal equation: a set of N + 1 simultaneous equations with m = 0, 1, 2, . . . , N
in the case of a causal filter, and a set of 2N + 1 simultaneous equations with m = −N, . . . , −1, 0, 1, . . . , N
in the case of a non-causal filter. The solutions of the normal equations, equation 2.10, are the coefficients
of the filter, f_n.
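In practice the normal equations can be solved by building the Toeplitz autocorrelation matrix and the
crosscorrelation vector directly. The sketch below (Python with numpy/scipy assumed) does this for a
causal filter; the input wavelet (4, 2, 1) and the desired spike output are illustrative only:

import numpy as np
from scipy.linalg import toeplitz, solve

def shaping_filter(x, z, n_coeffs):
    """Least-squares causal shaping filter f with f * x ~ z (normal equations, eq. 2.10).
    Assumes n_coeffs <= len(x) so that enough autocorrelation lags are available."""
    # autocorrelation lags r_0 ... r_{n_coeffs-1} of the input x
    r_full = np.correlate(x, x, mode='full')
    r = r_full[len(x) - 1 : len(x) - 1 + n_coeffs]
    # crosscorrelation lags g_0 ... g_{n_coeffs-1} of the desired output z with x
    g_full = np.correlate(z, x, mode='full')
    g = g_full[len(x) - 1 : len(x) - 1 + n_coeffs]
    return solve(toeplitz(r), g)

# example: a three-coefficient spiking filter for the hypothetical wavelet (4, 2, 1)
f = shaping_filter(np.array([4.0, 2.0, 1.0]), np.array([1.0, 0.0, 0.0]), 3)
print(f)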


2.2     Spiking Filter

A shaping filter whose desired output is a spike wavelet is called a spiking filter. A spiking filter is such
that the desired output z_k is zero at all times except time zero; that is, z = (1, 0, 0, 0, . . .) with z_0 = 1.
Considering a causal spiking filter with a causal input (x_{−1} = 0, x_{−2} = 0, . . .), the coefficients of
the crosscorrelation of z and x (equations 2.8 and 1.13) are

             g_m = \sum_k z_k x_{k−m} = x_{−m} = x_0 for m = 0, and 0 for m = 1, 2, . . . , N.              (2.11)

The normal equations become

             \sum_{n=0}^{N} f_n r_{m−n} = x_0 for m = 0, and 0 for m = 1, 2, . . . , N.                     (2.12)
Let h denote a causal prediction filter for prediction distance one (one time unit ahead), with input x_k,
where x_0 occurs at time 0. The desired output is z_k = x_{k+1}, where z_0 = x_1 occurs at time zero. In
this way, the crosscorrelation becomes

                        g_m = \sum_k z_k x_{k−m} = \sum_k x_{k+1} x_{k−m} = r_{m+1}.                        (2.13)

Solving the corresponding normal equations enables the prediction filter to be computed. From
equation 2.12, the solutions of

                               \sum_{n=0}^{N−1} h_n r_{m−n} = r_{m+1}                                       (2.14)

give the coefficients (h_0, h_1, . . . , h_{N−1}) of the prediction operator. Since the predicted value is

                               y_k = \hat{x}_{k+1} = \sum_{n=0}^{N−1} h_n x_{k−n},                          (2.15)

the prediction error is

                               x_k − \hat{x}_k = x_k − \sum_{n=0}^{N−1} h_n x_{k−1−n}.                      (2.16)

Thus, the coefficients of the prediction-error operator, f, for prediction distance one (one time unit
ahead) are given by

                               (f_0, f_1, f_2, . . . , f_N) = (1, −h_0, −h_1, . . . , −h_{N−1}).            (2.17)

The following example shows how the prediction-error filter is found.
Suppose that the input signal is the two-length wavelet (b_0, b_1), the filter is (f_0, f_1) and the desired
output signal is the three-length wavelet (d_0, d_1, d_2). The actual output is the convolution

                 (b_0, b_1) ∗ (f_0, f_1) = (f_0 b_0, f_0 b_1 + f_1 b_0, f_1 b_1) ≈ (d_0, d_1, d_2).         (2.18)

The normal equations, in which the left-hand side involves the autocorrelation in Toeplitz matrix form
and the right-hand side is the column vector of crosscorrelations of (d_0, d_1, d_2) with (b_0, b_1, 0)
(equations 1.13 and 1.14), are

                               (b_0^2 + b_1^2) f_0 + (b_0 b_1) f_1 = d_0 b_0 + d_1 b_1                      (2.19)
                               (b_0 b_1) f_0 + (b_0^2 + b_1^2) f_1 = d_1 b_0 + d_2 b_1.                     (2.20)

Given the minimum-phase wavelet b = (3, 2) and the desired output, the spike d = (1, 0, 0), where the
one (1) occurs at time zero, we need the two-length filter that will convert the input wavelet into a unit
spike. Substituting the given values, the normal equations become

                                                     13f0 + 6f1 = 3                                       (2.21)

                                                    6f0 + 13f1 = 0.                                       (2.22)
Solving equations 2.21 and 2.22 simultaneously, the least-squares spiking filter becomes

                                    (f_0, f_1) = (39/133, −18/133).                                         (2.23)
And normalising equation 2.23, so that its leading coefficient is one, gives the normalised spiking filter
or prediction-error filter to be (1, −0.4615).
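Equations 2.21 to 2.23 can be verified with a two-line linear solve (Python/numpy):

import numpy as np

R = np.array([[13.0, 6.0],      # autocorrelation of (3, 2) in Toeplitz form
              [6.0, 13.0]])
g = np.array([3.0, 0.0])        # crosscorrelation of the desired spike (1, 0, 0) with (3, 2)

f = np.linalg.solve(R, g)
print(f)                        # [ 0.2932... -0.1353...]  =  (39/133, -18/133)
print(f / f[0])                 # [ 1. -0.4615...]  the normalised prediction-error filter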
By assuming the input wavelet to be a maximum-phase wavelet, b = (2, 3), instead of the minimum-
phase wavelet, b = (3, 2), considered above, the normal equations are

                                                  13f0 + 6f1 = 2                                                     (2.24)
                                                 6f0 + 13f1 = 0,                                                     (2.25)

for which the least-squares filter becomes

                                    (f_0, f_1) = (26/133, −12/133)                                          (2.26)
and the prediction-error filter is (1, −0.4615). The prediction-error filter is the same for both the
minimum-phase wavelet and the maximum-phase wavelet because the two wavelets have the same
autocorrelation. In general, all wavelets with the same autocorrelation (irrespective of whether a wavelet
is minimum phase, maximum phase or mixed phase) are associated with the same prediction-error filter
[RT08].
The above statement allows the prediction-error filter to be computed easily in the following example.
Suppose the input wavelet is (b_0, b_1) = (3, 2) and the desired output, the amplitude one time unit
ahead, is d = (2, 0). We seek a prediction filter (for one time unit ahead) of the form (h_0, h_1, h_2, . . .).
The actual output, the convolution of the input wavelet with the filter (equation 2.1), is

             (b_0, b_1) ∗ (h_0, h_1, h_2, . . .) = (h_0 b_0, h_0 b_1 + h_1 b_0, h_1 b_1 + h_2 b_0, h_2 b_1, . . .) ≈ (d_0, d_1),    (2.27)

and the normal equation (equations 2.1, 1.13 and 2.8) is

                                       (b_0^2 + b_1^2) h_0 = d_0 b_0 + d_1 b_1.                             (2.28)

From the normal equation 2.28,
                                                        13h0 = 6.                                                    (2.29)
The prediction-error filter then becomes

                                       (f0 , f1 ) = (1, −h0 ) = (1, −0.4615).                                        (2.30)

Similarly, if the input is a maximum-phase wavelet b = (2, 3), then the desired output, a unit time step
ahead, is d = (3, 0). Accordingly, the normal equation 2.28 becomes 13h0 = 6 as before, thus, the
same prediction-error filter, (f0 , f1 ) = (1, −0.4615) will be applied.
The latter results show that a prediction filter for one time unit ahead, equation 2.29, can be designed
by equating the autocorrelation of the wavelet in Toeplitz matrix form, the left-hand side of the normal
equation, to the crosscorrelation of the minimum-delay wavelet with the same wavelet advanced by one
time unit (equations 2.28 and 2.29). The prediction-error filter is then the spiking filter arranged in the
form of equation 2.30. And, as shown, for any maximum-delay or mixed-delay wavelet with the same
autocorrelation as the minimum-delay wavelet, the spiking filter will be the same when normalised
(equations 2.23 and 2.26).
2.3     White Convolutional Model

This is a convolutional model for the seismic trace x, in which x = w ∗ ε, where ε is the white reflection
coefficient series and w is the field wavelet. The field wavelet is often represented as w = s ∗ i ∗ a ∗ m,
where s, i, a and m are respectively the source wavelet, the response of the instrument, the absorption
filter and the multiple-reflection impulses (multiples). It is conventional to refer to the combination of the
actual source wavelet, the instrumentation response and the near-surface effects as the signature wavelet.


2.4     Wavelet Processing

Wavelet processing involves processes such as signature deconvolution, predictive deconvolution,
frequency and wavenumber filtering, stacking and other ancillary operations that reduce multiples and
increase the signal-to-noise ratio of the trace x. Through these processes, the trace is first converted
to a dephased trace of the form y = g ∗ ε, which is an improved trace. The basic objective of wavelet
processing is to transform the dephased wavelet, g, into a sharp zero-phase interpreter wavelet, h. A
two-sided shaping filter, f, is needed to transform the input g into the output h; that is, g ∗ f ≈ h. This
shaping filter is then applied to the dephased trace to yield as output an interpreter trace, z, given by

                                  z = y ∗ f = (g ∗ ε) ∗ f = h ∗ ε.                                          (2.31)
From equation 2.31, an interpreter trace consists of the interpreter wavelet attached to each of the
computed reflection coefficients in the reflectivity function. To be sharp, an interpreter wavelet
should have a broad amplitude spectrum and a zero-phase spectrum. For a given input signal, its
full amplitude and phase reverse is an ideal inverse filter needed for most filtering operations. Also, a
band pass filter is often employed to transform the input signal into a zero phase band pass signal or a
minimum-phase band pass signal [RT08].


2.5     All-Pass Filter

A phase-shift filter is a filter that operates (acts) only on the phase of a signal owing to the property
that it has a flat amplitude spectrum. When normalised, its flat amplitude spectrum is equal to unity.
In other words, a phase-shift filter does not affect the amplitude of a signal but changes its phase
spectrum. The autocorrelation of a phase-shift filter, which is the convolution of the phase-shift filter
with its reverse about time zero, is a unit spike. Thus, the inverse of a phase-shift filter is equal to its
time reverse [RT08].
A one-sided filter whose amplitude spectrum is flat and equal to unity is referred to as all-pass filter.
Since the amplitude spectrum and phase spectrum represent respectively the gain and delay of a signal,
the amplitude spectrum of an all-pass system is one for all frequencies. And the logarithm of its gain
is zero for all frequencies. This means that when a signal is passed into an all-pass filter, the phase
of the all-pass filter is added to the phase spectrum of the signal, while its amplitude remains the
same. However, a corresponding inverse all-pass filter transforms a non-minimum phase wavelet into a
minimum-phase wavelet by subtracting its phase from the phase spectrum of the signal.
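A hedged numerical illustration (Python/numpy): the simplest all-pass wavelet is a pure delay, whose
amplitude spectrum is flat and whose convolution with its time reverse is a unit spike.

import numpy as np

w = np.linspace(-np.pi, np.pi, 401)

# a pure delay is the simplest all-pass wavelet: it shifts only the phase
p = np.array([0.0, 0.0, 1.0])                        # unit spike delayed by two samples

P = p[0] + p[1]*np.exp(-1j*w) + p[2]*np.exp(-2j*w)   # its spectrum
print(np.allclose(np.abs(P), 1.0))                   # True: flat (unit) amplitude spectrum

# convolving the all-pass wavelet with its time reverse collapses to a unit spike
# (the spike sits at the centre of the 'full' convolution, i.e. at lag 0)
print(np.convolve(p, p[::-1]))                       # [0. 0. 1. 0. 0.]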
 Section 2.6. Convolutional Model                                                                        Page 15

2.6     Convolutional Model

Most source wavelets produced by air-guns during seismic work are low-amplitude source signals with
a non-minimum phase. Source wavelets, ghosting effects produced by the interfaces, and other near-
surface effects constitute a waveform called the signature wavelet. Signature deconvolution, a process of
removing the signature wavelet from the seismic trace by means of the inverse of the signature, provides
a way of eliminating source wavelets, which are often known either by direct measurement or by an
estimation technique.
Given that the signature wavelet is causal and non-minimum phase, and that x, s, m and ε represent a
seismic trace, the source or signature wavelet, the multiple response and the reflectivity series respectively,
the recorded trace can approximately be described by a linear time-invariant convolutional model of the
form

                                             x = s ∗ m ∗ ε.                                                 (2.32)

If z denotes the signature-free trace, i.e. z = m ∗ ε, then x = s ∗ z.
In other words, when a seismic field trace is the input signal into a signature deconvolution filter, the
desired output is the signature-free trace.


2.7     Non Minimum-Delay Wavelet

The canonical representation, a key idea in signature deconvolution, states that a non-minimum delay
wavelet, s, is equal to the convolution of its minimum-delay counterpart, b (i.e. a wavelet with the
same amplitude spectrum as the non-minimum-delay wavelet s but with a minimum-phase spectrum),
and an all-pass wavelet, p, which has a white (i.e. flat or constant) magnitude spectrum. The all-pass
carries the extra phase [RT08]. The canonical representation of a signature wavelet is given as

                                                s = b ∗ p.                                                (2.33)

Thus, the problem is to find the two components of the signature, b and p, given the signature s. The
energy spectrum, ψ(w), of a wavelet is the product of its Fourier amplitude and the complex conjugate
of that amplitude at each frequency. Using the forward Fourier transform, in which the amplitude at a
particular frequency is the sum of the amplitude values at the various time delays, we have

           s(w) = \sum_{m=0}^{N} s_m e^{−iwm}  and the complex conjugate  s(w)^* = \sum_{n=0}^{N} s_n e^{iwn}.     (2.34)

When the terms in equation 2.34 are combined, we have

                     |s(w)|^2 = s(w) s(w)^* = \sum_{m=0}^{N} s_m e^{−iwm} \sum_{n=0}^{N} s_n e^{iwn},              (2.35)
where s(w) is the forward Fourier transform of the signature. From equation 2.35, setting m − n = k,
the energy spectrum becomes

                     ψ(w) = \sum_{m=0}^{N} \sum_{n=0}^{N} s_m s_n e^{−iw(m−n)}                              (2.36)

                          = \sum_{k=−N}^{N} e^{−iwk} \sum_{n=0}^{N} s_{n+k} s_n.                            (2.37)

But the autocorrelation of the digitised amplitudes of a wavelet is defined (see equations 1.11 to 1.15) as

                                       r_k = \sum_{n=0}^{N} s_{n+k} s_n,                                    (2.38)

and, noting that it is symmetric, that is r_k = r_{−k}, equation 2.37 can be written as

                                       ψ(w) = \sum_{k=−N}^{N} e^{−iwk} r_k.                                 (2.39)


The above implies that, given the signature s, the energy spectrum and the autocorrelation of the
wavelet can easily be computed. From the foregoing, there is enough information to compute the
minimum-delay counterpart of the non-minimum-delay signature.
Firstly, the z-transform of the energy spectrum is expressed as

                                       ψ(z) = \sum_{k=−N}^{N} z^k r_k.                                      (2.40)

From equation 2.40, z^N ψ(z) is a polynomial of degree 2N. Due to the symmetry of the autocorrelation,
ψ(z^{−1}) = ψ(z); that is, z is a root of the polynomial if, and only if, z^{−1} is a root. For example,
given the autocorrelation (3, 10, 3), where the centre point 10 is at time index 0, the z-transform of the
autocorrelation is ψ(z) = 3z^{−1} + 10 + 3z, and the polynomial zψ(z) = 3 + 10z + 3z^2, which factors
as (z + 3)(3z + 1), has roots −3 and −1/3.
Assuming that there are no roots of modulus one, there are 2N roots in all: N roots of modulus less
than one and N roots of modulus greater than one.
Secondly, the polynomial

                    b_N (z − z_1)(z − z_2) . . . (z − z_N) = b_0 + b_1 z + . . . + b_N z^N                  (2.41)

could be formed from the N roots z_1, . . . , z_N of modulus greater than one, where the constant b_N is
determined by requiring the wavelet b to have the same energy as the signature s. That is, the constant
b_N is determined by requiring that

                     b_0^2 + b_1^2 + . . . + b_N^2 = s_0^2 + s_1^2 + · · · + s_N^2.                         (2.42)

Thus, b is the desired minimum-delay counterpart of the signature s.
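A small Python/numpy sketch of this construction for the autocorrelation (3, 10, 3) used in the example
above, assuming (as stated) that no root has modulus one:

import numpy as np

# z*psi(z) = 3 + 10z + 3z^2 for the autocorrelation (3, 10, 3); np.roots takes
# the coefficients in decreasing powers of z
roots = np.roots([3.0, 10.0, 3.0])
outside = roots[np.abs(roots) > 1.0]          # keep the N roots of modulus > 1

# rebuild b_0 + b_1 z (np.poly returns decreasing powers, so reverse the order)
b = np.poly(outside)[::-1].real

# rescale so the wavelet energy equals r_0 = 10, as required by equation (2.42)
b *= np.sqrt(10.0 / np.sum(b**2))

print(b)                                      # [3. 1.]  the minimum-delay counterpart
print(np.correlate(b, b, mode='full'))        # [ 3. 10.  3.]  recovers the autocorrelation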
Furthermore, the inverse b−1 =f = (f0 , f1 , f2 , . . .) of the minimum delay counterpart b can be obtained
by carrying out the polynomial division
                                        1
                                                        = f0 + f1 z + f2 z 2 + . . .                (2.43)
                             b0 + b1 z + . . . + bN z N
Re-writing equation 2.43, with a view to describing a computing method in order to obtain the coefficient
fi , we have
                                             N              ∞
                                                        m
                                                 bm z             fn z n = 1.                       (2.44)
                                          m=0               n=0
If we let n = k - m and re-arrange terms, we have
                   \sum_{k=0}^{\infty} \left( \sum_{m=0}^{N} b_m f_{k-m} \right) z^k = 1.              (2.45)

For equation 2.45 to hold,
       b_0 f_0 = 1   and   \sum_{m=0}^{N} b_m f_{k-m} = 0   for k = 1, 2, 3, \ldots, where n = k - m \ge 0,   (2.46)

then f_0 = b_0^{-1}. This value is then substituted into equation 2.46 for k = 1, and the equation is solved for
f_1. These values, f_0 and f_1, are then put into equation 2.46 for k = 2, which is solved for f_2. This recursive
substitution is continued to obtain the inverse f = b^{-1}.
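A minimal Python sketch of this recursion is given below; the function name and the number of output terms
are choices of this illustration, and the expansion is only meaningful when b is minimum-delay.

    import numpy as np

    def minimum_delay_inverse(b, nterms):
        """First nterms coefficients of f = b^{-1}, from the recursion of equation 2.46
        (assumes b is minimum-delay, so that the expansion converges)."""
        b = np.asarray(b, dtype=float)
        f = np.zeros(nterms)
        f[0] = 1.0 / b[0]
        for k in range(1, nterms):
            acc = sum(b[m] * f[k - m] for m in range(1, min(k, len(b) - 1) + 1))
            f[k] = -acc / b[0]
        return f

    print(minimum_delay_inverse([2.0, 1.0], 5))   # [ 0.5  -0.25  0.125 -0.0625  0.03125]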
Consequently, the expression
                                f ∗ s = f ∗ (b ∗ p) = b^{-1} ∗ (b ∗ p) = p,                            (2.47)
is used to obtain an expression for the all-pass filter p. Having found the all-pass filter operator, its
inverse is simply computed by reversing the order of its coefficients with respect to time index 0, i.e.,
                                              p^{-1} = p^R.                                            (2.48)
Following equation 2.47 and the canonical representation of the signature, s = b ∗ p, the inverse of the
signature is computed as
                             s^{-1} = (b ∗ p)^{-1} = b^{-1} ∗ p^{-1} = f ∗ p^R.                        (2.49)
The following algorithm was used to design a computer programme that computes the minimum-phase
wavelet counterpart of a non-minimum-phase wavelet with the same energy.

   1. obtain the spiking filter from the non-minimum wavelet using the normal equations,
   2. compute the all-pass filter in the unnormalised form by convolving the obtained spiking filter with
      the non-minimum wavelet,
   3. normalise the all-pass filter to unity by dividing each element in the list by the norm of the all-pass
      filter; for example, a vector x = (x_1, x_2, x_3) has norm |x| = \sqrt{x_1^2 + x_2^2 + x_3^2},
   4. the minimum equivalent of the non-minimum wavelet is obtained by multiplying the inverse of the
      spiking filter by the norm of the all-pass filter.

The convolution of the minimum wavelet with the normalised all-pass filter gives back the non-minimum
wavelet. Thus, the obtained minimum wavelet has the same energy as the non-minimum wavelet.
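The sketch below follows these four steps in Python; the filter length nfilt, the use of scipy routines and the
truncation of the inverse to the wavelet length are choices of this illustration rather than details fixed by the
original programme [bod09].

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import lfilter

    def minimum_delay_counterpart(w, nfilt=9):
        """Steps 1-4 above for a non-minimum-delay wavelet w."""
        w = np.asarray(w, dtype=float)
        wp = np.pad(w, (0, nfilt + 1))
        r = np.array([np.dot(wp[lag:], wp[:len(wp) - lag]) for lag in range(nfilt + 1)])
        k = solve_toeplitz(r[:nfilt], r[1:])           # step 1: normal equations
        a = np.concatenate(([1.0], -k))                # spiking filter
        p = np.convolve(a, w)                          # step 2: unnormalised all-pass filter
        norm = np.sqrt(np.sum(p ** 2))                 # step 3: norm of the all-pass filter
        delta = np.zeros(len(w)); delta[0] = 1.0
        b = norm * lfilter([1.0], a, delta)            # step 4: inverse of the spiking filter, scaled
        return b, p / norm                             # minimum-delay wavelet, normalised all-pass

    wavelet = [4.0, 4.4, 1.6, 0.0, -2.0, -4.0, -1.6, 0.0, 1.6, 2.0]   # wavelet of Table 4.1
    b_min, p_norm = minimum_delay_counterpart(wavelet)

Convolving b_min with p_norm should then return the original wavelet, up to the truncation of the inverse.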

2.8     Signature Deconvolution

From the model x=s ∗ z, the desired quantity is the signature-free trace z. Signature deconvolution
involves the following steps:

  1. Given the signature, s, we compute the least-squares spiking filter, f (a wavelet that is the inverse
     of the minimum-phase wavelet inherent in the non-minimum-phase signature).

  2. The all-pass filter, p, is obtained by convolving the spiking filter, f , with the signature, s, i.e.
     p = f ∗ s.

  3. Convolve the field trace, x, with the reverse pR of the all-pass filter to obtain the dephased trace
     y, i.e., we compute y = x ∗ pR .

  4. The spiking filter, f , is then convolved with the dephased trace, y, to obtain the signature-free trace,
     z, that is, z = y ∗ f .

In other words, the following computations need to be carried out:

                                          x = s ∗ z,                                                   (2.50)
                              f ∗ s = b^{-1} ∗ s = b^{-1} ∗ (b ∗ p) = p,                               (2.51)
                                  p^R ∗ x = p^{-1} ∗ s ∗ z = y,                                        (2.52)
                       f ∗ y = f ∗ p^{-1} ∗ s ∗ z = s^{-1} ∗ s ∗ z = z.                                (2.53)

However, the signature-free trace is not free of reverberating energy. Due to the presence of this
unwanted quantity, predictive spiking deconvolution (a process of designing a spiking filter, which is the
inverse of the minimum-phase wavelet that constitutes the signature-free trace, and convolving this spiking
filter with the signature-free trace) is applied. A trace that is an estimate of the approximate reflectivity
is obtained after predictive spiking deconvolution is applied to the signature-free trace.
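A compact sketch of the four signature-deconvolution steps is given below; the spiking filter f is assumed to
have been computed already (for instance as in the Section 2.7 sketch), and the time-origin bookkeeping of
the reversed all-pass filter is noted only in a comment.

    import numpy as np

    def signature_deconvolution(x, s, f):
        """Steps 2-4 above, given the field trace x, the signature s and its
        least-squares spiking filter f."""
        p = np.convolve(f, s)            # step 2: all-pass filter p = f * s
        y = np.convolve(x, p[::-1])      # step 3: dephased trace y = x * p^R (the reversal
                                         #         shifts the time origin by len(p) - 1 samples)
        z = np.convolve(y, f)            # step 4: signature-free trace z = y * f
        return z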




   (a) (a) Minimum-delay input and desired output for a prediction distance of 1. (b) The prediction filter
   for a prediction distance of 1 and the resulting prediction. (c) The prediction-error filter for a prediction
   distance of 1 and the difference between desired output and prediction.
   (b) (a) Non-minimum-delay input (i.e., mixed-delay input) and desired output for a prediction distance
   of 1. (b) The prediction filter for a prediction distance of 1 and the resulting prediction for the mixed-
   delay input. (c) The prediction-error filter for a prediction distance of 1 and the difference between
   desired output and prediction.

                                        Figure 2.1: (a) and (b) [RT08].
3. Deconvolution
In deconvolving a digitised seismic trace, obtained by sampling a continuously varying function of time at a
uniform time step, say 4 ms, a mathematical method that involves the design of a least-squares filter is
employed to convert an input signal into a desired output signal. The design of such a least-squares filter
requires two major computations, namely, the autocorrelation of the input signal and the crosscorrelation of
the desired output signal with the input signal. Typically, the least-squares error criterion is employed in the
deconvolution of a seismic trace. In deconvolving a seismic trace, there is a need for a model that describes
the internal structure of the trace. The trace can be represented as the convolution of a reflectivity series
with a seismic wavelet. In other words, methods of deconvolution are based on the convolution model of the
seismic trace, from which an estimate of the reflectivity series is recovered from the recorded trace.
With a view to identifying closely spaced reflecting boundaries, the wavelet needs to be removed from the
trace to yield the reflectivity series. This act of removal is the reverse of the convolution process and is
called inverse filtering or deconvolution [RT08].
In deconvolving a trace, the conventional assumption is that each wavelet has the same minimum-phase
shape and that the arrival times and strengths of the wavelets are given by a reflectivity series of
uncorrelated random variables (equations 1.3, 1.4 and 1.5). Quite often, an operator referred to as a
spiking operator is used to deconvolve, or remove, the wavelet from the trace, thereby yielding
the reflectivity series; the inverse of the spiking operator is a minimum-delay wavelet shape.
Notably, a prediction-error filter with a prediction distance equal to one unit of time is referred to as
a spiking deconvolution filter. However, deconvolution of reverberation in the form of a mixed-
delay wavelet is carried out with a gap deconvolution filter, designed as a least-squares prediction-
error filter with a prediction distance greater than unity (one). Moreover, other forms of filters
can be derived from a minimum-phase wavelet. A wavelet can be sectioned into two parts: namely,
the wavelet's head, i.e. the part that consists of the wavelet's first α coefficients, and the wavelet's tail,
i.e. the part that consists of all coefficients beyond the α-th coefficient. A head-shaping operator is a
shaping filter that shapes the wavelet to its head, while a shaping operator that shapes a wavelet to its
tail is called a tail-shaping operator. The head-shaping operator is the same as the prediction-error
operator for the trace with a prediction distance α [RT08].


3.1     Seismic Deconvolution

This is a method adopted to remove unwanted effects produced by the earth in the form of absorption,
reverberation (multiples), ghosting, and seismic source and receiver responses. Consider the convolutional
model x = s ∗ a ∗ m ∗ i ∗ ε, where s, a, m, i and ε are the seismic source (or signature), the intrinsic seismic
absorption operator, the multiple-reflection operator, the instrument response and the reflectivity, respectively.
Since a and i can be measured, they are removed alongside the signature during signature deconvolution.
Thus, the model becomes m ∗ ε = z (the signature-free trace), the model on which seismic deconvolution is
based. In this case, the aim of deconvolution is to remove m in order to obtain the ideal seismogram, ε.
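As a toy illustration of this model, the short operators below are hypothetical placeholders (not field values);
the sketch only shows how the convolutional chain and the signature-free trace would be formed numerically.

    import numpy as np

    # Hypothetical short operators, purely to illustrate x = s*a*m*i*eps and z = m*eps.
    s   = np.array([1.0, 0.5, -0.2])            # seismic source (signature)
    a   = np.array([1.0, -0.1])                 # intrinsic absorption operator
    m   = np.array([1.0, 0.0, -0.4])            # multiple-reflection operator
    i   = np.array([1.0, 0.3])                  # instrument response
    eps = np.array([0.0, 1.0, 0.0, -0.7, 0.2])  # reflectivity

    x = np.convolve(np.convolve(np.convolve(np.convolve(s, a), m), i), eps)  # recorded trace
    z = np.convolve(m, eps)   # what remains after s, a and i are removed by signature deconvolution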





3.2     Least-squares Prediction and Smoothing

Signals that distort or corrupt a good observation of the desired quantity are referred to as noise, and
signals are corrupted in the form of a mixture, possibly additive, with other signals.
Basically, in the design of an actual digital filter, it is necessary to build a model. For the design of
a time-honoured least-squares model, three signals are required, namely, the input signal
x = (x_0, x_1, x_2, \ldots), the actual output signal y = (y_0, y_1, y_2, \ldots) and the desired output signal
z = (z_0, z_1, z_2, \ldots). For a finite-impulse-response (FIR) filter of length N, an actual digital filter
k will have a finite number of coefficients given by k = (k_0, k_1, k_2, \ldots, k_{N-1}).
The output of the convolution of the filter with the input signal, i.e., y = k ∗ x, say yn at time n is
given by the discrete convolution expression of the form


                                yn = k0 xn + k1 xn−1 + . . . + kN −1 xn−N +1 .                            (3.1)

The difference between the values of the desired output, z_n, and the actual output, y_n, at time
instant n is the error e_n = z_n - y_n. The quantity

                                      I = \sum_{n} (z_n - y_n)^2                                       (3.2)

is the error energy. An optimum filter in the least-squares method is one that minimises the value of
the error energy. Equating the partial derivative of the error energy (equation 3.2) with respect to each of
the filter coefficients to zero minimises the error energy. The minimisation of the error energy yields a system
of N linear simultaneous equations referred to as the normal equations, in agreement with equation 2.10. In
matrix form, equation 2.10 becomes
                                                                        
        \begin{pmatrix}
        r_0     & r_1     & \cdots & r_{N-1} \\
        r_1     & r_0     & \cdots & r_{N-2} \\
        \vdots  & \vdots  & \ddots & \vdots  \\
        r_{N-1} & r_{N-2} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} k_0 \\ k_1 \\ \vdots \\ k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_{N-1} \end{pmatrix}.                                 (3.3)

In equation 3.3, the autocorrelation r_n of the input signal is a known quantity, as is g_n, the positive-lag
value of the cross-correlation of the desired output signal with the input signal. The unknown quantities
are the values of the filter coefficients, k_n. The autocorrelation of the input signal, x_n, is
                                         r_n = \sum_{i} x_{i+n} x_i,                                   (3.4)
and the cross-correlation between the desired output and the input is
                                         g_n = \sum_{i} z_{i+n} x_i.                                   (3.5)

The solution of the normal equations 3.3 yields the filter coefficients. It should be observed that the
square matrix in the normal equations has the autocorrelation coefficients arranged in Toeplitz form. In
other words, all the autocorrelation coefficients along any main diagonal or subdiagonal are the same.
Due to this Toeplitz structure, the Levinson algorithm can be used to find the solution of the
normal equations [RT08].
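The sketch below sets up and solves the normal equations 3.3 in Python; scipy's solve_toeplitz exploits the
Toeplitz structure with a Levinson-type recursion. The function names and the zero-padded correlation
estimates are choices of this illustration.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def positive_lags(a, b, maxlag):
        """c_n = sum_i a_{i+n} b_i for n = 0, ..., maxlag - 1 (positive lags only)."""
        n = max(len(a), len(b)) + maxlag
        a = np.pad(np.asarray(a, dtype=float), (0, n - len(a)))
        b = np.pad(np.asarray(b, dtype=float), (0, n - len(b)))
        return np.array([np.dot(a[lag:], b[:n - lag]) for lag in range(maxlag)])

    def least_squares_filter(x, z, nfilt):
        """Filter k of length nfilt from the normal equations 3.3."""
        r = positive_lags(x, x, nfilt)   # autocorrelation of the input (equation 3.4)
        g = positive_lags(z, x, nfilt)   # crosscorrelation of desired output with input (equation 3.5)
        return solve_toeplitz(r, g)      # Levinson-type solve of the Toeplitz system

For instance, least_squares_filter([1, 0.5], [0.3, 1], 5) should reproduce the five-term shaping filter quoted
in the worked example of Section 3.4.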

To obtain the column vector on the right-hand side of equation 3.3 for the prediction problem, it is important
to note that the filter uses the input's past values to predict the signal's future values. That is, the input is
the signal x_n, and the desired output, z_n, is the time-advanced version of the input, x_n, by the prediction
distance α (a positive integer), figure 1.7(a)(i). At the present time n, the desired output z_n is the future
input x_{n+α}. The filter is constructed in such a way that its output y_n at the present time n is an optimum
estimate of the future value x_{n+α}. If the estimate is denoted by \hat{x}_{n+α}, the filter's action can be
represented by the convolution

                    \hat{x}_{n+\alpha} = k_0 x_n + k_1 x_{n-1} + \ldots + k_{N-1} x_{n-N+1},           (3.6)

i.e. the prediction operator acts on the input up to time n, figure 1.7(a)(iii), and estimates its value at
some future time n + α, figure 1.7(a)(ii).
The cross-correlation between the desired output and the input is given by the positive-lag coefficients of
the cross-correlation between the time-advanced trace and the trace itself,

                              g_n = \sum_{i} x_{i+n+\alpha} x_i = r_{n+\alpha}.                        (3.7)

This means the cross-correlation between the desired output and the input is equal to the autocorrelation
of the input for lags greater than or equal to α. The normal equations 3.3, for the prediction filter, become

        \begin{pmatrix}
        r_0    & r_1     & \cdots & r_N     \\
        r_1    & r_0     & \cdots & r_{N-1} \\
        \vdots & \vdots  & \ddots & \vdots  \\
        r_N    & r_{N-1} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} k_0 \\ k_1 \\ \vdots \\ k_N \end{pmatrix}
        =
        \begin{pmatrix} r_\alpha \\ r_{\alpha+1} \\ \vdots \\ r_{N+\alpha} \end{pmatrix},              (3.8)

and the solution of equation 3.8 gives the coefficients of the optimum prediction filter being sought.
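A small Python sketch of equation 3.8 follows; the zero padding and the filter length are illustrative choices.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def prediction_filter(x, nfilt, alpha):
        """Least-squares prediction filter of length nfilt and prediction distance
        alpha, from the normal equations 3.8 (right-hand side g_n = r_{n+alpha})."""
        xp = np.pad(np.asarray(x, dtype=float), (0, nfilt + alpha))
        r = np.array([np.dot(xp[lag:], xp[:len(xp) - lag]) for lag in range(nfilt + alpha)])
        return solve_toeplitz(r[:nfilt], r[alpha:alpha + nfilt])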


3.3     The Prediction-Error Filter

With the knowledge of a prediction filter, a prediction-error filter can be obtained easily. The prediction-error
series is the difference between the true value x_{n+α}, figure 1.7(a)(ii), and the estimated or predicted value
\hat{x}_{n+α}, figure 1.7(a)(iv). The prediction error at time instant n + α, figure 1.7(a)(vi), is thus

   \epsilon_{n+\alpha} = x_{n+\alpha} - \hat{x}_{n+\alpha}
                       = x_{n+\alpha} - (k_0 x_n + k_1 x_{n-1} + \ldots + k_{N-1} x_{n-N+1}).          (3.9)

Replacing n + α by n in equation 3.9, the prediction error at the present time n is

   \epsilon_n = x_n - \hat{x}_n = x_n - (k_0 x_{n-\alpha} + k_1 x_{n-\alpha-1} + \ldots + k_{N-1} x_{n-\alpha-N+1}),   (3.10)

where the prediction error \epsilon_n is a signal that represents the non-predictable part of x_n. Thus,
equation 3.10 reveals that the suitable prediction-error filter, figure 1.7(a)(v), is

                          f = (1, 0, 0, \ldots, 0, -k_0, -k_1, \ldots, -k_{N-1}).                      (3.11)

The diagrams in figure 1.7(b)(i-vi) are the non-minimum-wavelet counterparts of the minimum wavelet
in figure 1.7(a)(i-vi). When figure 2.1(a)(a) on the right-hand side (R.H.S.) is compared with figure 2.1(a)(b)
(R.H.S.), it should be observed that the values of the non-negative side of the prediction coincide with those
of the desired output. However, the values of the non-negative side of the prediction do not agree with those
of the desired output in figure 2.1(b)(a) (R.H.S.) and figure 2.1(b)(b), owing to the fact that the prediction
is obtained by passing 2.1(a)(b) (R.H.S.) through an all-pass filter. It should also be noted that the prediction
filter is the same for an input minimum-delay wavelet and an input non-minimum-delay wavelet, as in figures
2.1(a)(b) on the left-hand side (L.H.S.) and 2.1(b)(b) (L.H.S.). But the prediction-error filter in figure 2.1(b)(c)
(R.H.S.) is the same as the prediction-error filter in figure 2.1(a)(c) (R.H.S.) passed through an all-pass filter.
It is important to note that there are α − 1 zeros in the prediction-error operator that lie between the leading
coefficient, namely 1, and the negated prediction-operator coefficients; these α − 1 zeros constitute the gap.
If δ_0 = (1, 0, 0, . . .) denotes the zero-delay unit spike (where the 1 occurs at time instant 0) and
δ_α = (0, 0, 0, . . . , 1) denotes the unit spike for delay α (where the 1 occurs at time instant α), the
z-transforms of the zero-delay unit spike and of the unit spike for delay α are, respectively, 1 and z^α. In this
way, the prediction-error operator is the difference between the zero-delay unit spike and the prediction
operator delayed by the prediction distance α. This means

                                            f = δ_0 − δ_α ∗ k.                                         (3.12)
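Converting a prediction filter into the corresponding gap-deconvolution operator is then a one-line
construction, sketched below (prediction_filter refers to the earlier sketch; the names are illustrative).

    import numpy as np

    def prediction_error_filter(k, alpha):
        """f = delta_0 - delta_alpha * k (equation 3.12): a leading 1, alpha - 1 zeros,
        then the negated prediction coefficients; alpha = 1 gives the spiking filter."""
        return np.concatenate(([1.0], np.zeros(alpha - 1), -np.asarray(k, dtype=float)))

Convolving this operator with the trace, e.g. np.convolve(prediction_error_filter(prediction_filter(x, 10, 3), 3), x),
would perform gap deconvolution with a gap of two samples.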

Then, the prediction operator is converted to a prediction-error operator by subtracting the left-hand
side of equation 3.8 from its right-hand side. Thus,

        \begin{pmatrix} r_\alpha \\ r_{\alpha+1} \\ \vdots \\ r_{N-1+\alpha} \end{pmatrix}
        -
        \begin{pmatrix}
        r_0     & r_1     & \cdots & r_{N-1} \\
        r_1     & r_0     & \cdots & r_{N-2} \\
        \vdots  & \vdots  & \ddots & \vdots  \\
        r_{N-1} & r_{N-2} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} k_0 \\ k_1 \\ \vdots \\ k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},                                           (3.13)
which is equivalent to

        \begin{pmatrix}
        r_\alpha       & r_0     & r_1     & \cdots & r_{N-1} \\
        r_{\alpha+1}   & r_1     & r_0     & \cdots & r_{N-2} \\
        \vdots         & \vdots  & \vdots  & \ddots & \vdots  \\
        r_{\alpha+N-1} & r_{N-1} & r_{N-2} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} 1 \\ -k_0 \\ -k_1 \\ \vdots \\ -k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}                                            (3.14)

and could be re-written as

        \begin{pmatrix}
        r_\alpha       & r_{\alpha-1}   & \cdots & r_1    & r_0     & r_1     & \cdots & r_{N-1} \\
        r_{\alpha+1}   & r_\alpha       & \cdots & r_2    & r_1     & r_0     & \cdots & r_{N-2} \\
        \vdots         & \vdots         &        & \vdots & \vdots  & \vdots  &        & \vdots  \\
        r_{\alpha+N-1} & r_{\alpha+N-2} & \cdots & r_N    & r_{N-1} & r_{N-2} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ -k_0 \\ -k_1 \\ \vdots \\ -k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.                                           (3.15)

Now, we define ρ0 , ρ1 , . . ., ρα−1 with the equation


        \begin{pmatrix}
        r_0          & r_1          & \cdots & r_{\alpha-1} & r_\alpha     & r_{\alpha+1} & \cdots & r_{\alpha+N-1} \\
        r_1          & r_0          & \cdots & r_{\alpha-2} & r_{\alpha-1} & r_\alpha     & \cdots & r_{\alpha+N-2} \\
        \vdots       & \vdots       &        & \vdots       & \vdots       & \vdots       &        & \vdots         \\
        r_{\alpha-1} & r_{\alpha-2} & \cdots & r_0          & r_1          & r_2          & \cdots & r_N
        \end{pmatrix}
        \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ -k_0 \\ -k_1 \\ \vdots \\ -k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} \rho_0 \\ \rho_1 \\ \vdots \\ \rho_{\alpha-1} \end{pmatrix}.                   (3.16)
Combining equations 3.15 and 3.16, we have

        \begin{pmatrix}
        r_0            & r_1            & \cdots & r_{\alpha-1} & r_\alpha     & r_{\alpha+1} & \cdots & r_{\alpha+N-1} \\
        r_1            & r_0            & \cdots & r_{\alpha-2} & r_{\alpha-1} & r_\alpha     & \cdots & r_{\alpha+N-2} \\
        \vdots         & \vdots         &        & \vdots       & \vdots       & \vdots       &        & \vdots         \\
        r_{\alpha-1}   & r_{\alpha-2}   & \cdots & r_0          & r_1          & r_2          & \cdots & r_N            \\
        r_\alpha       & r_{\alpha-1}   & \cdots & r_1          & r_0          & r_1          & \cdots & r_{N-1}        \\
        r_{\alpha+1}   & r_\alpha       & \cdots & r_2          & r_1          & r_0          & \cdots & r_{N-2}        \\
        \vdots         & \vdots         &        & \vdots       & \vdots       & \vdots       &        & \vdots         \\
        r_{\alpha+N-1} & r_{\alpha+N-2} & \cdots & r_N          & r_{N-1}      & r_{N-2}      & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ -k_0 \\ -k_1 \\ \vdots \\ -k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} \rho_0 \\ \rho_1 \\ \vdots \\ \rho_{\alpha-1} \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},          (3.17)

and the column vector on the left-hand side of equation 3.17 is the prediction-error operator
f = δ_0 − δ_α ∗ k with prediction distance α. This type of operator performs what is referred to as
gap deconvolution.


3.4       Spiking Deconvolution

Based on the standard convolutional model, which states that a seismic trace is the convolution of
a minimum-phase wavelet with a white reflectivity, broader versions of the convolutional model begin with
a way to relax either the minimum-phase assumption or the white-reflectivity assumption, or both. Given
that x = (x_0, x_1, x_2, \ldots) is a seismic trace, b = (b_0, b_1, b_2, \ldots) is a minimum-phase wavelet, and
ε = (ε_0, ε_1, ε_2, \ldots) is a white reflectivity, the convolutional model becomes

                     x_n = b_0 ε_n + b_1 ε_{n-1} + \cdots = \sum_{i=0}^{\infty} b_i ε_{n-i}.           (3.18)

The basic steps required to carry out deconvolution are as follows. Having acquired the trace in the form of
seismic data, the autocorrelation function of this trace is computed. The trace autocorrelation function
averages out the random uncorrelated reflectivity series, owing to the reflectivity being white (i.e. having a
flat magnitude spectrum). Because of this, the trace's autocorrelation is, within statistical bounds, the same
function as the wavelet's autocorrelation. In other words, given a seismic trace, the autocorrelation of the
wavelet can be estimated, and one need only deal with a wavelet, and not the entire trace, in designing
an operator. More importantly, the above implies that once the spiking operator is found, any linear filter
of a given length can be computed easily within the limits of computational accuracy. Once the
minimum wavelet is known, it can be removed from the trace by deconvolution; what is left over is
the random reflectivity series. For a unit prediction distance, i.e., α = 1, the prediction in equation 3.10
becomes
                          \hat{x}_n = k_0 x_{n-1} + k_1 x_{n-2} + \ldots + k_{N-1} x_{n-N},            (3.19)
and, for an input wavelet b, the autocorrelation of the wavelet is the same as that of the trace, r.
With a view to obtaining the coefficients of the prediction operator for the unit prediction distance for
the wavelet, we set α = 1. The normal equations 3.8 become
                                                                         
        \begin{pmatrix}
        r_0     & r_1     & \cdots & r_{N-1} \\
        r_1     & r_0     & \cdots & r_{N-2} \\
        \vdots  & \vdots  & \ddots & \vdots  \\
        r_{N-1} & r_{N-2} & \cdots & r_0
        \end{pmatrix}
        \begin{pmatrix} k_0 \\ k_1 \\ \vdots \\ k_{N-1} \end{pmatrix}
        =
        \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{pmatrix}.                                     (3.20)

The coefficients k_0, k_1, \ldots, k_{N-1} of the prediction operator for a unit prediction distance are the
components of the solution of the normal equations 3.20. Relative to equation 3.11, the prediction-error
operator for the prediction distance of 1 is
                          (a_0, a_1, a_2, \ldots, a_N) = (1, -k_0, -k_1, \ldots, -k_{N-1}),            (3.21)
which agrees with the earlier example in equation 2.30, where a = (1, a_1, a_2, \ldots, a_N) is a spiking operator,
or spiking filter, normalised so that its leading coefficient is 1, and it is required to be minimum-phase.
The convolution model for a trace, x, has two components: the minimum-phase wavelet and the white
reflectivity. The spiking operator provides a means to obtain both components. It should be noted that
the inverse of the spiking operator is the minimum-phase wavelet. The wavelet can be derived from a
spiking filter a with the relation a ∗ b = δ_0, which, with respect to equation 2.45, is
                     a_0 b_0 = 1   and   \sum_{i=0}^{M} a_i b_{n-i} = 0   for n = 1, 2, 3, \ldots      (3.22)
Equation 3.22 informs us that the minimum-phase wavelet b is the inverse of the spiking filter a (i.e.
b = a^{-1}). The spiking filter thus yields the reflectivity ε with
                                    y = x ∗ a = b ∗ ε ∗ b^{-1} = ε,                                    (3.23)
called spiking deconvolution, where b and ε are the minimum-phase wavelet and a white reflectivity,
respectively.
Furthermore, let us imagine that the input is b, the filter is q and the desired output is z. Then the
required filter has the impulse response q = z ∗ a. The inference drawn above is tested by computing
the output of the filter as b ∗ q = b ∗ (z ∗ a) ≈ z.
For example, if the least-squares shaping filter is computed from the input b = (1, 0.5) in order to obtain
the desired output z = (0.3, 1), the shaping filter is f = (0.3012, 0.8469, −0.4185, 0.1993, −0.0797).
The least-squares spiking filter computed by solving equation 3.8 is a = (0.9971, −0.4927, 0.2346, −0.09384),
which, when convolved with the desired output, gives a ∗ z = (0.2991, 0.8493, −0.4223, 0.2065, −0.09384) ≈ f.
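This worked example can be checked with a few lines of Python; the zero padding and the use of scipy's
Toeplitz solver are choices of this sketch, and the printed digits may differ in the last decimal place.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    b = np.array([1.0, 0.5])                     # input wavelet
    z = np.array([0.3, 1.0])                     # desired output
    bp, zp = np.pad(b, (0, 4)), np.pad(z, (0, 4))
    r = np.array([np.dot(bp[l:], bp[:6 - l]) for l in range(5)])   # autocorrelation lags r_0..r_4
    g = np.array([np.dot(zp[l:], bp[:6 - l]) for l in range(5)])   # crosscorrelation lags g_0..g_4

    f = solve_toeplitz(r, g)                                       # five-term shaping filter
    a = solve_toeplitz(r[:4], np.array([1.0, 0.0, 0.0, 0.0]))      # four-term spiking filter (desired delta_0)
    print(np.round(f, 4))                  # ~ [ 0.3012  0.8469 -0.4185  0.1993 -0.0797]
    print(np.round(np.convolve(a, z), 4))  # ~ [ 0.2991  0.8493 -0.4223  0.2065 -0.0938]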


3.5     Gap Deconvolution

Every minimum-phase wavelet b can be split into two parts about a particular coefficient, α. For a
wavelet b = (b_0, b_1, b_2, \ldots, b_{α−1}, b_α, b_{α+1}, b_{α+2}, \ldots), its head, h, is made up of the first α
coefficients, with b_0 in time spot 0, and its tail, t, is the second part of the wavelet, consisting of the
remaining coefficients advanced in time so that b_α occurs at time spot 0.
There are basically three ways to compute the gap-deconvolution operator for a wavelet re-written as
b = h + δ_α ∗ t: the head-filtering method, the tail-shaping method and the head-shaping method. Following
the definition of the head and the tail, the desired output of the prediction filter is the tail of the minimum-
phase wavelet. Therefore, it is appropriate to convolve the spiking filter with the tail in order to obtain the
prediction filter. That is,
                     k = a ∗ t,   k_i = \sum_{s=0}^{\infty} b_{α+s} a_{i-s},  where i = 0, 1, 2, \ldots,   (3.24)
corresponding to the prediction-error filter of equation 3.12 in the form
                                         f = δ_0 − δ_α ∗ a ∗ t,                                        (3.25)
where the head of the wavelet gives what could be referred to as the unreachable prediction error for the
prediction-error filter.
Alternatively, since δ_0 = a ∗ b, the prediction-error filter can be expressed as
                              f = a ∗ b − δ_α ∗ a ∗ t = a ∗ (b − δ_α ∗ t),                             (3.26)
and with
                                           h = b − δ_α ∗ t,                                            (3.27)
this becomes
                                              f = a ∗ h,                                               (3.28)
where h is the head and f is the prediction-error filter.
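A short sketch of the tail-shaping route is given below; the spiking filter a is assumed to have been computed
already (for example with one of the earlier sketches), and the function name is illustrative.

    import numpy as np

    def gap_operator_from_tail(a, b, alpha):
        """f = delta_0 - delta_alpha * (a * t), equations 3.24-3.25, where t is the
        tail of the wavelet b advanced so that b_alpha sits at time 0."""
        b = np.asarray(b, dtype=float)
        tail = b[alpha:]                                     # tail of the wavelet
        k = np.convolve(np.asarray(a, dtype=float), tail)    # prediction filter k = a * t
        return np.concatenate(([1.0], np.zeros(alpha - 1), -k))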


3.6     Piece Meal Convolutional Model

In this model, an observed seismogram or trace is subdivided into time intervals called windows (with a
corresponding division of the reflectivity function), figure 1.4. The boundaries of each window are set by the
arrival times of major primary reflections. Between major reflections, the wavelets are approximately constant;
in other words, between major reflections the time-varying convolutional model reduces to a time-invariant
model. At each boundary, the dynamic structure of the seismic trace changes. Accordingly, the following
models hold within each window:
(trace (x) in window 1) = (reflectivity (ε) in window 1) ∗ (wavelet (w) in window 1),
(trace (x) in window 2) = (reflectivity (ε) in window 2) ∗ (wavelet (w) in window 2),
(trace (x) in window 3) = (reflectivity (ε) in window 3) ∗ (wavelet (w) in window 3).
This representation of a seismic trace by a window-by-window convolutional model is called a piece-meal
convolutional model.


3.7     Time Varying Convolutional Model

The primary idea of deconvolution in seismic data processing is to find the reflectivity function (strati-
graphic information as a function of depth) given a reflection seismogram. To carry out deconvolution,
a signature-free convolutional model of the form z = m ∗ ε, which is assumed to hold within each
window of the seismogram, is used (windows B and C in figure 1.5). On the corresponding window, m is
the minimum-delay multiple wavelet and ε is a white-noise sample.


3.8    Random Reflection Coefficient Model

This model assumes that the convolutional model within each window on a reflection seismogram is in the
form of a stable feedback system, which requires an inverse operator (a causal feedforward system, the
deconvolution operator) to convert the observed seismogram into the desired reflectivity. The deconvolution
operator removes the multiples and leaves the reflectivity function behind. The method of predictive
deconvolution removes an estimate of the predictable minimum-delay component of the trace to yield
an estimate of the non-predictable "white component" referred to as the reflectivity. The basic procedure
for determining the deconvolution operator and carrying out deconvolution is as follows:

  1. Autocorrelation function of the portion of the seismic trace lying within the specified time window
     is computed.

  2. Computation of the coefficients of the prediction-error operator that correspond to this autocor-
     relation is carried out.

  3. The deconvolution of the trace is made by convolving the deconvolution operator with the trace.
     The output of the deconvolution process is known as the prediction-error series, which approximates
     the reflectivity function within the specified time gate.
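A compact sketch of this three-step procedure is shown below; the window is passed as a pair of sample
indices, and the operator length nfilt and prediction distance alpha are illustrative parameters.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def predictive_deconvolution(trace, window, nfilt, alpha=1):
        """Steps 1-3 above: windowed autocorrelation, prediction-error operator,
        convolution of the operator with the trace."""
        x = np.asarray(trace, dtype=float)[window[0]:window[1]]        # step 1: windowed trace
        xp = np.pad(x, (0, nfilt + alpha))
        r = np.array([np.dot(xp[lag:], xp[:len(xp) - lag]) for lag in range(nfilt + alpha)])
        k = solve_toeplitz(r[:nfilt], r[alpha:alpha + nfilt])          # step 2: prediction coefficients
        f = np.concatenate(([1.0], np.zeros(alpha - 1), -k))           #         prediction-error operator
        return np.convolve(f, trace)                                   # step 3: approximate reflectivity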
4. Application of the Convolutional Model
4.1     Water Reverberation

A water body on the surface of the earth behaves as an imperfect energy trap: the water-air
interface acts as a strong reflector (reflection coefficient -1) and the water-bottom interface acts as a weak
reflector (reflection coefficient ρ less than 1), figure 1.8(a). In other words, source wavelet energy
enters the water layer travelling downwards and the transmitted energy goes through to the deep horizon,
which reflects it. The energised wavelet then travels upwards and is again reflected by the water surface,
which results in a phenomenon called reverberation, owing to the multiple reflections. Reverberation
blurs the seismic trace and hence the image of the subsurface.
Given that the two-way travel time for the wavelet energy to return to the surface of the water layer,
having been reflected at the water bottom, is N, a unit impulse travelling downwards becomes an impulse
of size ρ travelling upwards once it has been reflected. Thus, the first secondary impulse has value
-ρ and travels downwards after it has been reflected by the water-air boundary at time N, as shown in
figure 1.8(a). Similarly, the second secondary impulse has value ρ^2 at time 2N, travelling downwards
into the water body. The process is continuously repeated and leads to a total result of the form
              q = (1, 0, 0, \ldots, 0, -ρ, 0, 0, \ldots, 0, ρ^2, 0, 0, \ldots, 0, -ρ^3, 0, 0, \ldots)          (4.1)
for a one-pass minimum-delay reverberation train. It should be noted that there are N − 1 zeros
separating any two consecutive non-zero values, and N is known as the cyclic time of the reverberation.
A one-pass dereverberation filter is the inverse of the one-pass reverberation train; that is, a one-pass
dereverberation filter is the minimum-delay wavelet
                                       q^{-1} = (1, 0, 0, \ldots, 0, ρ).                                       (4.2)
However, in reality, the seismic energy passes through the water-body filter twice. The two-pass
reverberation train is the convolution of the one-pass reverberation train with itself; that is, the two-pass
reverberation train is the minimum-delay signal
              b = q ∗ q = (1, 0, 0, \ldots, 0, -2ρ, 0, 0, \ldots, 0, 3ρ^2, 0, 0, \ldots, 0, -4ρ^3, 0, 0, \ldots)   (4.3)
and its inverse is
                             b^{-1} = a = (1, 0, 0, \ldots, 0, 2ρ, 0, 0, \ldots, 0, ρ^2).                      (4.4)
The convolution a ∗ b gives a unit spike, and when the inverse filter a is convolved with the idealised
trace, it eliminates the water-layer reverberations on the trace.
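The sketch below builds a truncated one-pass reverberation train, forms the two-pass train by self-convolution
and checks that the inverse of equation 4.4 collapses it to (approximately) a unit spike; the values of ρ, N and
the number of retained terms are illustrative.

    import numpy as np

    def reverberation_train(rho, N, n_terms=4):
        """Truncated one-pass reverberation train of equation 4.1, with N - 1 zeros
        between successive spikes of alternating sign."""
        q = np.zeros(n_terms * N + 1)
        q[::N] = [(-rho) ** j for j in range(n_terms + 1)]
        return q

    rho, N = 0.5, 3
    q = reverberation_train(rho, N)
    b = np.convolve(q, q)                    # two-pass reverberation train (equation 4.3)
    a = np.zeros(2 * N + 1)                  # its inverse, equation 4.4
    a[0], a[N], a[2 * N] = 1.0, 2 * rho, rho ** 2
    print(np.round(np.convolve(a, b), 4))    # approximately (1, 0, 0, ...), up to truncation error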


4.2     An Illustrative Example of Deconvolution of Non-minimum De-
        lay Wavelet and Trace Based on the Convolutional Model Using
        Canonical Representation of a Seismic trace or a Wavelet

From figures 4.1(a) and 4.1(b), the digital data used for the illustration were obtained from the signal
varying continuously with time by sampling the continuous function at a discrete time interval of 4 milli-
seconds, or time index n, where t = n∆t for n = 0, 1, 2, 3, \ldots The following data were obtained from the
seismic trace and the seismic wavelet respectively.





                  (a) A Seismic Trace                             (b) A Seismic Wavelet

                                        Figure 4.1: (a) and (b)

     Time index (n)                           0     1     2      3      4      5      6      7     8     9
     Amplitude at discrete time for wavelet   4.0   4.4   1.6    0.0    -2.0   -4.0   -1.6   0.0   1.6   2.0
     Amplitude at discrete time for trace     2.0   1.2   -1.0   -3.0   -2.0   -0.4   0.8    2.2   1.0   0.2

Table 4.1: Discretised amplitude values for a non-minimum-delay seismic wavelet and non-minimum-delay
seismic trace



The algorithm of a computer programme written in Python to carry out the deconvolution of a seismic
signal is described below. The programme can be accessed at [bod09]. A few lines of the code are
used to deconvolve the non-minimum-delay wavelet and trace into their respective components: a
normalised minimum-delay wavelet and a phase-shift (all-pass) filter, and a normalised minimum-delay trace
and spike-deconvolved trace.

  1. The autocorrelation table was computed and entries were added, based on the definition of the auto-
     correlation of a wavelet, to obtain the r_{m-n} values in the normal equations 2.10.

  2. The advanced autocorrelation values, i.e. the cross-correlation of the tail (prediction distance of one)
     with the wavelet values, were set in vector form to get the g_m values in the normal equations 2.10.

  3. The two quantities, together with a vector representing the prediction filter, were set in the
     form of a system of normal equations in matrix form.

  4. The solutions obtained from the system of linear equations were arranged in the form of a
     prediction-error filter or spiking filter (Spiking F).

  5. The inverse of the prediction-error or spiking filter was then found, which is the unnormalised
     minimum wavelet of the non-minimum-delay wavelet.

  6. Subsequently, the spiking filter was convolved with the trace or wavelet to obtain the unnor-
     malised all-pass filter.

  7. The unnormalised phase-shift filter (U.n all-pass) and unnormalised minimum-delay wavelet (U.n
     min WVL) or trace (U.n min trace) were normalised.

It was found that it is the convolution of the normalised all-pass filter (N.all-pass (B)) with the
normalised minimum-delay wavelet or trace (N.mini WVL (A)), and not the convolution of their unnormalised
counterparts, that gives back the non-minimum-delay wavelet or non-minimum-delay trace. The tables and
plots below show the results of the deconvolution processes.

 Time (n)    Spiking F   U.n min WVL      U.n all-pass   N.mini WVL (A)     N.all-pass(B)   Conv.of (A) and (B)
 0           1.000       1.000            4.000          4.577              0.874           4.000
 1           -0.094      0.938            0.653          4.292              0.143           4.404
 2           0.592       0.288            -0.150         1.318              -0.033          1.164
 3           -0.229      -0.057           0.194          -0.260             0.042           0.035
 4           0.552       -0.532           0.150          -2.433             0.033           -1.875
 5           -0.203      -0.705           -0.875         -3.231             -0.191          -3.840
 6           -0.063      -0.276           0.698          -1.267             0.153           -1.578
 7           0.176       0.011            -0.316         0.049              -0.070          -0.056
 8           -0.156      0.262            0.511          1.199              0.112           1.344
 9           0.263       0.292            -0.289         1.335              -0.063          1.700


Table 4.2: Amplitude values for spiking filter, unnormalised minimum wavelet, unnormalised all-pass
filter, normalised minimum wavelet, normalised all-pass filter and convolution of normalised minimum
wavelet and normalised all-pass filter.


 Time (n)    Spiking F   U.n min trace    U.n all-pass   N.mini trace (A)   N.all-pass(B)   Conv.of (A) and (B)
 0           1.000       1.000            2.000          2.715              0.737           2.000
 1           -0.806      0.811            -0.411         2.202              -0.152          1.211
 2           0.697       -0.055           -0.573         -0.148             -0.211          -1.015
 3           0.179       -0.788           -1.000         -2.138             -0.368          -3.018
 4           0.961       -0.826           0.126          -2.243             0.047           -1.982
 5           0.221       -0.389           -0.501         -1.055             -0.184          -0.330
 6           0.159       0.098            -0.320         0.266              -0.118          0.884
 7           -0.084      0.518            0.432          1.400              0.159           2.145
 8           0.508       0.267            -0.388         0.725              -0.143          0.923
 9           -0.313      0.0412           0.179          0.112              0.066           0.035


Table 4.3: Amplitude values for spiking filter, unnormalised minimum trace, unnormalised all-pass filter,
normalised minimum trace, normalised all-pass filter and convolution of normalised minimum trace and
normalised all-pass filter.




   (a) Spike-deconvolved trace and spiking filter of the trace. (b) Spiking filter for the wavelet and
   unnormalised minimum-delay trace. (c) Spike-deconvolved wavelet and unnormalised minimum-delay
   wavelet. (d) Normalised minimum-delay wavelet and normalised minimum-delay trace.

                                    Figure 4.2: (a), (b), (c) and (d)

4.3     Conclusion

The reflectivity series is the input signal, which is convolved with multiple wavelets to produce an output
in the form of a seismic trace. The operation of convolution is reversed by spike-deconvolving the seismic
trace to recover the input, in a process called seismic inversion. The deconvolution operation is performed
by convolving the deconvolution filter with the seismic trace. Predictive deconvolution is based
on the minimum-delay and whiteness criteria and includes gap deconvolution and spike deconvolution. From
the result of predictive deconvolution, the output signal is not the desired pure reflectivity series but a
reflectivity series that has been passed through an unknown and unwanted all-pass filter. Thus, if
the source wavelet is close to a minimum-phase wavelet, the unknown all-pass filter has little or
no effect on the desired pure reflectivity series. However, if the source wavelet is a non-minimum-delay
wavelet (mixed-delay wavelet), the output of predictive deconvolution is a distorted pure reflectivity
series owing to the action of the unknown all-pass filter. Finally, the foregoing implies there is a need to
develop an improved convolutional model of a seismic trace, one which enables the unwanted all-pass filter
to be identified and eliminated by a method described within the model.
Appendix A.
A.1     Dual Sensor Wavelet Estimation

Seismic processing relies to a large extent on the availability of downgoing and upgoing travelling
waves. However, a travelling wave is not measured directly: the sum of the particle-velocity attributes
of the downgoing and upgoing waves is the signal recorded by a geophone, and the sum of the pressure
attributes of the upgoing and downgoing waves is the signal recorded by the hydrophone.
An event is said to be an upgoing travelling wave when the hydrophone and geophone signals provided by
the dual sensor are out of phase. On the other hand, if the geophone and hydrophone signals are in
phase, the event is described as a downgoing travelling wave. The d'Alembert equations provide
a means of computing the travelling waves directly from the data recorded by the dual sensor. In most
field work, the receiver is a dual sensor buried below the source of the seismic energy. The d'Alembert
equations convert particle-velocity signals and pressure signals into the downgoing and the upgoing
waves at the receiver's location. In the convolutional model of the deep system of subsurface layers,
the downgoing waves are the input signals and the upgoing waves are the output signals [RT08].
The derivation of d’Alembert equations is as follow: Let V , p, z, ρ and v be the geophone record of a
particle-velocity trace, the hydrophone record of a pressure trace, an acoustic impedance of the medium,
density of medium, and a wave propagation velocity respectively. And let d, u, D, U be the downgoing
wave motion of the a pressure trace, the upgoing wave motion of the pressure trace, the downgoing
wave motion of the particle-velocity trace and the upgoing wave motion of the particle-velocity trace
respectively. Then, based on the Berkhout convention, we have

                                          V =D+U         and p = d + u.                           (A.1)
                         But d = zD and u = −zU            , where z= ρv                          (A.2)

Then, the solutions of the above set of equations yield the d'Alembert equations for the downgoing and
the upgoing particle-velocity wave motions and pressure wave motions, respectively:

                          D = \frac{V + p/z}{2}     and     U = \frac{V - p/z}{2},                     (A.3)
              also,       d = \frac{zV + p}{2}      and     u = \frac{-zV + p}{2}.                     (A.4)
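A direct transcription of equations A.3-A.4 into Python is given below; the array inputs and the function
name are illustrative.

    import numpy as np

    def dalembert(V, p, rho, v):
        """Split dual-sensor records into downgoing/upgoing particle-velocity (D, U)
        and pressure (d, u) wave motions, following equations A.3 and A.4."""
        V = np.asarray(V, dtype=float)
        p = np.asarray(p, dtype=float)
        z = rho * v                      # acoustic impedance of the medium
        D = (V + p / z) / 2.0
        U = (V - p / z) / 2.0
        d = (z * V + p) / 2.0
        u = (-z * V + p) / 2.0
        return D, U, d, u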


A.2     Einstein or Predictive Deconvolution

Deconvolution consists of two steps, namely, the measurement or determination of the input wavelet,
and the deconvolution of the output signal by the input wavelet to yield the unit-impulse response of the
earth layers. Predictive (spiking) deconvolution, which requires the estimation of the input from
knowledge of the output signal alone, deconvolves a signal in a different way from signature
deconvolution, in which the input wavelet is measured directly. However, with the use of a dual sensor,
the downgoing wave at the receiver location can be used to deconvolve the upgoing wave at the receiver
location. This form of signature deconvolution, in which both the output signal (the upgoing wave at the
receiver location) and the input signal (the downgoing wave at the receiver location) are determined by the
use of a dual-sensor receiver, is called Einstein deconvolution.


Similarities in methods
  Predictive: The reflection coefficient series is obtained as the deconvolved signal.          Einstein: same.
  Predictive: The deconvolution process is carried out on the upgoing signal.                   Einstein: same.
  Predictive: The same deconvolution operator is used (the inverse of the downgoing signal).    Einstein: same.

Differences in methods
  Predictive: The small white reflectivity and minimum-phase seismic-wavelet hypotheses allow the
  predictive deconvolution operator to be computed by least squares from the upgoing signal.
  Einstein: Einstein deconvolution has the advantage of not being based on the small white reflectivity
  hypothesis.

  Predictive: The final dynamic deconvolution step is not necessary, owing to the small white
  reflectivity hypothesis.
  Einstein: Einstein deconvolution, which is more general, involves dynamic deconvolution.

  Predictive: It is robust and stable in the presence of noise, and has the advantage of many years of
  successful use.
  Einstein: Einstein deconvolution is more sensitive to noise.

                    Table A.1: Comparison of Einstein and predictive deconvolution



The difference between Einstein deconvolution and predictive deconvolution lies in the first step: the
way the input signal is obtained. Predictive deconvolution proves very useful in cases where the
input signal cannot be measured directly, so that it must be determined indirectly.
In Einstein deconvolution, the deconvolved signal is an estimate of the reflection response of the deep
system (the earth below the receiver), where a dual geophone-and-hydrophone sensor records both the
downgoing input signal and the upgoing output signal.
Basically, Einstein deconvolution requires two steps: first, the use of the d'Alembert equations to transform
the particle-velocity signal and the pressure signal into the downgoing and upgoing waves at the
receiver location; and second, the deconvolution of the upgoing wave by the downgoing wave. The
resulting unit-impulse reflection response of the rock layers below the receiver is the input required
for dynamic deconvolution, and the output of the dynamic deconvolution is the series of
reflection coefficients of the interfaces located below the receiver.
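One way to sketch the second step is as a least-squares shaping filter, in the spirit of the normal equations used throughout this essay: a filter f is designed so that f convolved with the downgoing wave approximates the upgoing wave, and f is then an estimate of the reflection response of the deep system. The Python/NumPy/SciPy code below is an illustrative assumption (the filter length, trace lengths and synthetic response h are all made up), not the essay's own implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, solve

def ls_deconvolve(down, up, nf=50):
    """Least-squares filter f such that f * down approximates up (normal equations)."""
    r = np.correlate(down, down, mode="full")[len(down) - 1:][:nf]  # autocorrelation of input
    g = np.correlate(up, down, mode="full")[len(down) - 1:][:nf]    # crosscorrelation with output
    return solve(toeplitz(r), g)

# Synthetic check: if up = down * h for a short response h, the filter recovers h.
rng = np.random.default_rng(0)
down = rng.standard_normal(400)
h = np.array([0.4, 0.0, -0.2, 0.1, 0.05])           # hypothetical deep-system response
up = np.convolve(down, h)[:len(down)]
print(np.round(ls_deconvolve(down, up, nf=5), 3))   # close to h
\end{verbatim}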
Although the two methods need to be used together to yield the best result, the comparison in Table A.1
can be drawn between Einstein deconvolution and predictive deconvolution.


A.3     Tail Shaping and Head Shaping

A filter whose input is the minimum-phase wavelet b = (b_0, b_1, b_2, ...) and whose desired output is a
tail t of the wavelet is known as a tail-shaping filter, while one whose desired output is a head is
referred to as a head-shaping filter. The cross-correlation between the desired output and the input in the
case where the tail is the desired output is
\[
g_n = \sum_i b_{\alpha+n+i}\, b_i = r_{n+\alpha}, \tag{A.5}
\]
showing that the cross-correlation equals the advanced autocorrelation of the minimum-phase wavelet.
That is, the normal equations 3.8 are equally applicable in this case.
On the other hand, if the head, h = (b_0, b_1, ..., b_{\alpha-1}), is desired as the output, the cross-correlation
between the desired output and the input is the autocorrelation ρ of the head, defined as
\begin{align}
\rho_0 &= b_0^2 + b_1^2 + \dots + b_{\alpha-1}^2, \tag{A.6}\\
\rho_1 &= b_1 b_0 + b_2 b_1 + \dots + b_{\alpha-1} b_{\alpha-2}, \;\;\dots, \tag{A.7}\\
\rho_{\alpha-1} &= b_{\alpha-1}\, b_0. \tag{A.8}
\end{align}
The right-hand side of the normal equations 3.8 is then
\[
g = (\rho_0, \rho_1, \rho_2, \dots, \rho_{\alpha-1}, 0, 0, \dots, 0), \tag{A.10}
\]
with the head-shaping filter f = (f_0, f_1, ..., f_{\alpha+N-1}) obtained from
\[
\begin{pmatrix}
r_0 & r_1 & \cdots & r_{\alpha-1} & r_\alpha & \cdots & r_{\alpha+N-1}\\
r_1 & r_0 & \cdots & r_{\alpha-2} & r_{\alpha-1} & \cdots & r_{\alpha+N-2}\\
\vdots & \vdots & & \vdots & \vdots & & \vdots\\
r_{\alpha-1} & r_{\alpha-2} & \cdots & r_0 & r_1 & \cdots & r_N\\
r_\alpha & r_{\alpha-1} & \cdots & r_1 & r_0 & \cdots & r_{N-1}\\
\vdots & \vdots & & \vdots & \vdots & & \vdots\\
r_{\alpha+N-1} & r_{\alpha+N-2} & \cdots & r_N & r_{N-1} & \cdots & r_0
\end{pmatrix}
\begin{pmatrix}
f_0\\ f_1\\ \vdots\\ f_{\alpha-1}\\ f_\alpha\\ \vdots\\ f_{\alpha+N-1}
\end{pmatrix}
=
\begin{pmatrix}
\rho_0\\ \rho_1\\ \vdots\\ \rho_{\alpha-1}\\ 0\\ \vdots\\ 0
\end{pmatrix}. \tag{A.11}
\]

The following examples explicitly compare the filters.
Let the minimum-phase wavelet be b = (1, −0.6, 0.3, −0.1) and the length of the prediction filter be
N = 4. For a prediction distance α = 2, the tail of the wavelet is t = (0.3, −0.1). The one-sided
minimum-phase autocorrelation is r = (1.46, −0.81, 0.36, −0.1). The cross-correlation of the tail with
the wavelet, which is the same as the advanced autocorrelation of the wavelet, is g = (0.36, −0.1, 0, 0).
The normal equations are
\[
\begin{pmatrix}
1.4600 & -0.8100 & 0.3600 & -0.1000\\
-0.8100 & 1.4600 & -0.8100 & 0.3600\\
0.3600 & -0.8100 & 1.4600 & -0.8100\\
-0.1000 & 0.3600 & -0.8100 & 1.4600
\end{pmatrix}
\begin{pmatrix}
0.2998\\ 0.0801\\ -0.0420\\ -0.0225
\end{pmatrix}
=
\begin{pmatrix}
0.3600\\ -0.1000\\ 0.0000\\ 0.0000
\end{pmatrix}, \tag{A.12}
\]
and the solution of equation A.12, the column vector multiplying the autocorrelation matrix, is the
prediction operator. The corresponding equation for the prediction-error operator is
\[
\begin{pmatrix}
1.4600 & -0.8100 & 0.3600 & -0.1000 & 0.0000 & 0.0000\\
-0.8100 & 1.4600 & -0.8100 & 0.3600 & -0.1000 & 0.0000\\
0.3600 & -0.8100 & 1.4600 & -0.8100 & 0.3600 & -0.1000\\
-0.1000 & 0.3600 & -0.8100 & 1.4600 & -0.8100 & 0.3600\\
0.0000 & -0.1000 & 0.3600 & -0.8100 & 1.4600 & -0.8100\\
0.0000 & 0.0000 & -0.1000 & 0.3600 & -0.8100 & 1.4600
\end{pmatrix}
\begin{pmatrix}
1.0000\\ 0.0000\\ -0.2998\\ -0.0801\\ 0.0420\\ 0.0225
\end{pmatrix}
=
\begin{pmatrix}
1.3600\\ -0.6002\\ 0.0000\\ 0.0000\\ 0.0000\\ 0.0000
\end{pmatrix}, \tag{A.13}
\]

where the middle column vector is the prediction-error operator
\[
f = (1,\; 0,\; -0.2998,\; -0.0801,\; 0.0420,\; 0.0225), \tag{A.14}
\]
as given by the solution of equations A.13. The cross-correlation of the head with the wavelet, which
is the same as the autocorrelation of the head, is ρ = (1.36, −0.6, 0, 0, 0, 0).
For the head shaping operator,


\[
\begin{pmatrix}
1.4600 & -0.8100 & 0.3600 & -0.1000 & 0.0000 & 0.0000\\
-0.8100 & 1.4600 & -0.8100 & 0.3600 & -0.1000 & 0.0000\\
0.3600 & -0.8100 & 1.4600 & -0.8100 & 0.3600 & -0.1000\\
-0.1000 & 0.3600 & -0.8100 & 1.4600 & -0.8100 & 0.3600\\
0.0000 & -0.1000 & 0.3600 & -0.8100 & 1.4600 & -0.8100\\
0.0000 & 0.0000 & -0.1000 & 0.3600 & -0.8100 & 1.4600
\end{pmatrix}
\begin{pmatrix}
1.0000\\ 0.0002\\ -0.2997\\ -0.0801\\ 0.0420\\ 0.0225
\end{pmatrix}
=
\begin{pmatrix}
1.3600\\ -0.6000\\ 0.0000\\ 0.0000\\ 0.0000\\ 0.0000
\end{pmatrix}, \tag{A.15}
\]

where the middle column vector is the head-shaping operator

                            f = (1, 0.0002, −0.2997, −0.0801, 0.0420, 0.0225).                        (A.16)

Comparing the solutions of the equations, filters A.14 and A.16 are the same within the limits of the
numerical approximations used.
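The whole example can be reproduced with a few lines of code. The sketch below (Python with NumPy/SciPy, an assumed environment rather than the essay's own listing) forms the Toeplitz normal equations for both the tail-shaping (prediction-error) operator and the head-shaping operator, and confirms that the two operators agree.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, solve

b = np.array([1.0, -0.6, 0.3, -0.1])             # minimum-phase wavelet
N, alpha = 4, 2                                   # filter length and prediction distance

r = np.correlate(b, b, mode="full")[len(b) - 1:]  # one-sided autocorrelation (1.46, -0.81, 0.36, -0.1)
r = np.concatenate([r, np.zeros(N + alpha)])      # zero-pad for the larger lags needed below

# Tail shaping: the right-hand side is the advanced autocorrelation (equation A.12);
# the predictor is then embedded in the prediction-error operator (equation A.14).
k = solve(toeplitz(r[:N]), r[alpha:alpha + N])
f_tail = np.concatenate([[1.0], np.zeros(alpha - 1), -k])

# Head shaping: the right-hand side is the autocorrelation of the head (1, -0.6)
# padded with zeros (equations A.15 and A.16).
head = b[:alpha]
rho = np.correlate(head, head, mode="full")[len(head) - 1:]
g = np.concatenate([rho, np.zeros(N + alpha - len(rho))])
f_head = solve(toeplitz(r[:N + alpha]), g)

print(np.round(f_tail, 4))   # approx. (1, 0, -0.2998, -0.0801, 0.0420, 0.0225)
print(np.round(f_head, 4))   # nearly the same operator, as claimed above
\end{verbatim}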
Equally, the spike-deconvolution and gap-deconvolution filters are related by f = a ∗ h (equation 3.28);
because the spike filter a is necessarily minimum phase, it follows that the gap filter f is minimum
phase if, and only if, the head h is minimum phase. Now let the length of the prediction filter be N = 4
and the prediction distance α = 1, so that the tail of the wavelet b = (1, −0.6, 0.3, −0.1) is
t = (−0.6, 0.3, −0.1). The cross-correlation of the tail with the wavelet, which is the same as the
advanced autocorrelation of the wavelet, is g = (−0.81, 0.36, −0.1, 0). Then the normal equations are
                                                                                  
            1.46000 −0.81000 0.36000 −0.10000                  −0.59980       −0.81000
         −0.81000 1.46000 −0.81000 0.36000  −0.06096 −0.36000
          0.36000 −0.81000 1.46000 −0.81000 .  0.04497  = −0.10000                         (A.17)
                                                                                  

           −0.10000 0.36000 −0.81000 1.46000                   −0.00110        0.00000

where the middle column vector, given by the solution of equation A.17, is the prediction operator.
In this way, the spike operator is

     a = (1, −k0 , −k1 , −k2 , . . . , −kN −1 ) = (1.00000, 0.59980, 0.06096, −0.04497, 0.00110),    (A.18)

and the convolution of the spike operator with the head h = (1, −0.6), which corresponds to a gap of
α = 2, gives the prediction-error operator as


                      f = (1, −0.00022, −0.29890, −0.08154, 0.02808, −0.00067).                      (A.19)

In summary, the prediction-error operator has been calculated in three ways. For a prediction-error
operator of a given length, the head-shaping operator of equation A.16 produces the most accurate result,
while equation A.19, obtained by convolving the spike filter with the head, gives the least accurate result.
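The third route, convolving the spike operator with the head, is a one-line computation. The sketch below reproduces equation A.19 from the values quoted in equation A.18 (again assuming NumPy; the listing is illustrative, not the essay's code).
\begin{verbatim}
import numpy as np

a = np.array([1.0, 0.59980, 0.06096, -0.04497, 0.00110])  # spike operator, equation A.18
h = np.array([1.0, -0.6])                                  # head of the wavelet
f = np.convolve(a, h)                                      # gap filter f = a * h
print(np.round(f, 5))   # approx. (1, -0.0002, -0.29892, -0.08155, 0.02808, -0.00066)
\end{verbatim}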

A.4     Gap Deconvolution of a Mixed-delay Wavelet

Usually, after a seismic trace has been processed so that effects such as the source signature, absorption
and the instrument response are removed, a non-minimum-delay residual wavelet u remains
in the resulting trace. The seismic trace model then becomes x = u ∗ m ∗ ε, where m and ε are the
minimum-delay multiple (reverberation) train and the white reflectivity respectively. Suppose

                                   m = b = (1, 0, −0.5, 0, 0.25, 0, −0.125, 0, . . .)                               (A.20)

then the seismic wavelet w is the convolution of the residual wavelet and the minimum-delay reverber-
ation train, i.e. the seismic trace is x = w ∗ ε. If the residual wavelet is the maximum-delay
wavelet u = (1, 2), then the seismic wavelet is the mixed-delay wavelet

                          w = u ∗ b = (1, 2, −0.5, −1, 0.25, 0.5, −0.125, −0.25, . . .).                            (A.21)

The prediction operator (for prediction distance 2) is k = (−0.5, 0, 0, . . .), which is verified by carrying
out the convolution
\[
k \ast w = (-0.5, -1, 0.25, 0.5, -0.125, -0.25, \dots), \tag{A.22}
\]
which is the non-negative-time part of the seismic wavelet advanced by two samples (tail-shaping prediction).
The prediction-error operator is f = (1, 0, 0.5, 0, 0, . . .), which is the dereverberation operator; when f is
convolved with w we obtain f ∗ w = (1, 2, 0, 0, 0, . . .), which is the residual wavelet u. Deconvolving the
seismic trace by this prediction-error operator gives f ∗ x = f ∗ w ∗ ε = u ∗ ε, so the deconvolved
trace is the reflectivity smoothed by the residual wavelet. This case shows that gap deconvolution
is suitable for removal of reverberation in a non-minimum-delay wavelet [RT08].
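A small numerical sketch of this example (assuming NumPy; the truncation length of the reverberation train is an arbitrary choice) confirms that the gap operator returns the residual wavelet:
\begin{verbatim}
import numpy as np

n = 16
b = np.zeros(n)
b[0::2] = (-0.5) ** np.arange(n // 2)       # reverberation train (1, 0, -0.5, 0, 0.25, ...)
u = np.array([1.0, 2.0])                    # maximum-delay residual wavelet
w = np.convolve(u, b)                       # mixed-delay seismic wavelet (equation A.21)

f = np.array([1.0, 0.0, 0.5])               # gap (alpha = 2) prediction-error operator
print(np.round(np.convolve(f, w)[:6], 3))   # (1, 2, 0, 0, 0, 0): the residual wavelet u
\end{verbatim}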
For example, assume that the water depth is 100 ft (30.48 m), that the water-surface and water-bottom
reflection coefficients are −1 and 0.5 respectively, and that the velocity of sound in water is 5000 ft/s
(1524 m/s). The two-way travel time is then 2 × 30.48/1524 ≈ 0.04 s = 40 ms. If the sampling increment
is 4 ms, the two-way travel time corresponds to 40/4 = 10 discrete time units. Figure 1.8(b) is a plot of
the reverberation coefficients of equation A.23, which follows from equation 4.3 when these reflection
coefficients are substituted. That is,
\[
b = (1, 0, 0, \dots, 0, -2(0.5), 0, 0, \dots, 0, 3(0.5)^2, 0, 0, \dots, 0, -4(0.5)^3, 0, 0, \dots). \tag{A.23}
\]
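A short sketch (assuming NumPy) that builds this reverberation coefficient series for the stated water-bottom coefficient and two-way travel time:
\begin{verbatim}
import numpy as np

c, period, nterms = 0.5, 10, 4              # water-bottom coefficient, two-way time in samples
b = np.zeros(period * nterms)
for k in range(nterms):
    b[k * period] = (k + 1) * (-c) ** k     # 1, -2c, 3c^2, -4c^3, ...
print(b[::period])                          # [ 1.   -1.    0.75 -0.5 ]
\end{verbatim}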

A way to test the accuracy of least-squares deconvolution is to compare the prediction-error operator
with the true dereverberation operator, or the deconvolved trace with the smoothed reflectivity. The
closer the values are, the better the deconvolution. The residual wavelet survives in the deconvolved
trace because its length is less than the cycle time, so it falls through the cracks of the
prediction-error filter.


A.5     Prewhitening

Pre-whitening is a means of boosting the value of the zero-lag coefficient of the autocorrelation of a
truncated signal, with a view to eliminating negative values in its power spectrum, Figure A.1 (appendix)
[RT08]. A truncated signal (one whose tail has been cut off) has some of its autocorrelation values
removed owing to the missing tail, and as a result its power spectrum can take negative values.
Alternatively, the truncated autocorrelation function can be made valid by forcing it to damp out
more rapidly, a process called tapering. The graphs in Figure A.3 (appendix) [RT08] are improved forms
of the graphs in Figure A.2 (appendix) [RT08] as a result of pre-whitening.
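A minimal sketch of both remedies (assuming NumPy; the 10% boost and the Hanning taper are illustrative choices, not values prescribed in the text):
\begin{verbatim}
import numpy as np

def prewhiten(r, percent=10.0):
    """Boost the zero-lag coefficient of a one-sided autocorrelation r."""
    r = np.array(r, dtype=float)
    r[0] *= 1.0 + percent / 100.0
    return r

def taper(r):
    """Damp a one-sided autocorrelation so that it dies out more rapidly."""
    r = np.asarray(r, dtype=float)
    return r * np.hanning(2 * len(r) - 1)[len(r) - 1:]

r = np.array([1.46, -0.81, 0.36, -0.1])     # truncated autocorrelation from the example above
print(prewhiten(r))                          # [ 1.606 -0.81   0.36  -0.1  ]
print(np.round(taper(r), 3))                 # damped version of r
\end{verbatim}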

A.6     Convolutional Model in the Frequency Domain

The components of the convolutional model of a seismic trace can be modelled in the frequency domain
as well as in the time domain [RT08]. Figure A.4 (appendix) [RT08] shows the components of the
convolutional model of a seismic trace, namely a minimum-phase wavelet and a white reflectivity.
Convolution in the time domain is equivalent to multiplication in the frequency domain: the frequency
spectrum of the trace is the product of the frequency spectrum of the seismic wavelet and the frequency
spectrum of the reflectivity. In the same way, the energy spectrum of the trace equals the product of the
energy spectrum of the seismic wavelet and the energy spectrum of the reflectivity. The similarity in
shape between the energy spectrum of the wavelet and the power spectrum of the trace is due to the near
flatness of the power spectrum of the reflectivity. Similarly, the rapid fluctuation seen in the energy
spectrum of the trace results from the high frequencies present in the reflectivity component, while the
smooth, slowly varying component can be related to the shape of the wavelet.
Considering the wavelet, reflectivity and trace autocorrelations, Figure A.5 (appendix) [RT08], the
trace autocorrelation is similar to the wavelet autocorrelation because the autocorrelation of the
reflectivity is of small magnitude at all lags except lag zero.
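This frequency-domain statement can be checked directly with the FFT. The sketch below (assuming NumPy; the wavelet and the length of the reflectivity series are illustrative) verifies that the spectrum of the trace is the product of the wavelet and reflectivity spectra.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
wavelet = np.array([1.0, -0.6, 0.3, -0.1])            # minimum-phase wavelet
reflectivity = rng.standard_normal(256)               # white reflectivity series
trace = np.convolve(wavelet, reflectivity)            # convolutional model in time

n = len(trace)                                        # pad both factors to the trace length
W = np.fft.rfft(wavelet, n)
E = np.fft.rfft(reflectivity, n)
X = np.fft.rfft(trace, n)
assert np.allclose(X, W * E)                          # multiplication in the frequency domain
\end{verbatim}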




Figure A.1: (a) A true autocorrelation has a positive spectrum. (b) The truncated autocorrelation,
however, has a spectrum with negative values. (c) If the truncated autocorrelation is tapered, the
resulting spectrum is positive. (d) If the zero-lag value of the truncated autocorrelation is given a 50%
boost, the resulting power spectrum is almost positive. [RT08]




Figure A.2: (a) A minimum-phase wavelet and its inverse wavelet. (b) The autocorrelations of the wavelets
shown in (a). (c) The amplitude spectra of the wavelets shown in (a). [RT08]




Figure A.3: (a) A prewhitened version of the minimum-phase wavelet shown in Figure A.2(a) and the
inverse of this prewhitened version. (b) The autocorrelations of the prewhitened wavelets shown in (a).
Note that the prewhitening was done by increasing the zero-lag value of the wavelet autocorrelation
shown in Figure A.2(b) to obtain the autocorrelation shown here. (c) The amplitude spectra
of the wavelets shown in (a). Note that the amplitude spectrum of the inverse shown here has better
characteristics than does its counterpart in Figure A.2(c). [RT08]




Figure A.4: (a) The two components of the convolutional model: a minimum-phase wavelet and a white
reflectivity. (b) The trace obtained by the convolution of the minimum-phase wavelet and the white
reflectivity in (a), and the spike-deconvolved trace. Compare the spike-deconvolved trace with the white
reflectivity shown in (a). [RT08]




Figure A.5: (c) The autocorrelation of the minimum-phase wavelet and the autocorrelation of the white
reflectivity shown in Figure A.4(a). (d) The autocorrelation of the trace and the autocorrelation of the
deconvolved trace shown in Figure A.4(b). (e) The energy spectrum of the minimum-phase wavelet and the
power spectrum of the white reflectivity shown in Figure A.4(a). (f) The power spectra of the original
trace shown on the left in Figure A.4(b) and of the deconvolved trace shown on the right in Figure A.4(b). [RT08]
Acknowledgements
It is with deep gratitude to God, the Father of lights, who blessed me with the good gifts of wisdom
and understanding, and with whom there is neither variation nor shadow of turning (James 1:17), that I write
to appreciate a number of people who made my stay at AIMS a memorable one. I wish to thank my
amiable supervisor, Professor George Smith of the University of Cape Town, South Africa, who played
a remarkable role in guiding me through this work and gave much valuable advice in general. My special
thanks go to the director of AIMS, Professor Fritz Hahne, the AIMS IT manager, Mr Jan Groenewald,
the AIMS English language instructor, Frances Aaro, the team of tutors and the members of staff at AIMS.
More especially, this essay would not have been a success without the assistance of Mihaja, my tutor for
this essay, the welfare manager Igsaan, who is quite caring, and my fellow colleagues at AIMS, who have
been wonderful friends.
Furthermore, I wish to thank my parents and siblings, who have always been there in every way. The
efforts of the founder and chair of the AIMS Council, Professor Neil Turok, University of Cambridge, UK,
are equally recognised. In all, I wish to say thank you, Jesus.




References
[bod09] Olubode Caleb Adetunji (bode@aims.ac.za), A computer code to carry out deconvolution of seismic waves,
        http://users.aims.ac.za/~bode.

[Dra97] Stanford Exploration Project, web pages translated by LaTeX2HTML version 97.1 (Nikos Drakos) and
        Hector Urdaneta, October 1997,
        http://sepwww.stanford.edu/sep/prof/fgdp/c1/paper_html/node2.html.

[PBL72] Alvin L. Parrack and Delbert R. Lunsford, Method for enhancing seismic data, United States Patent, 1972,
        http://www.google.co.za/patents?id=hVwUAAAAEBAJ&printsec=abstract&zoom=4&dq=digital+
        sampling+of+seismic+trace+for+deconvolution#PPA2,M1.

[RT08]  Enders A. Robinson and Sven Treitel, Digital imaging and deconvolution: The ABCs of
        seismic exploration and processing, Geophysical References Series, no. 15, Society of
        Exploration Geophysicists, Tulsa, OK, 2008.

[SR79]  Manuel T. Silvia and Enders A. Robinson, Deconvolution of geophysical time series in
        the exploration for oil and natural gas, no. 10, Elsevier Scientific Publishing Company,
        Amsterdam, The Netherlands, 1979.





				