Seismic Data Processing and Interpretation

1. Introduction

The purpose of seismic processing is to manipulate the acquired data into an image that
can be used to infer the sub-surface structure. Only minimal processing would be
required if we had a perfect acquisition system. Processing consists of the application of a
series of computer routines to the acquired data guided by the hand of the processing
geophysicist. The interpreter should be involved at all stages to check that processing
decisions do not radically alter the interpretability of the results in a detrimental manner.
Processing routines generally fall into one of the following categories:

* enhancing signal at the expense of noise
* providing velocity information
* collapsing diffractions and placing dipping events in their true subsurface locations
* increasing resolution.

2. Processing Steps

There are a number of steps involved from seismic data acquisition to the interpretation of
subsurface structure. Some of the common steps are summarized below:

              Acquisition      →  Static correction
              Processing       →  Velocity analysis
                                  (time/depth, Kirchhoff's, f-k domain)
              Interpretation   →  Seismic data to subsurface geology

In order to work with the above steps (or to work further with seismic data), a number of signal
processing operations are needed to accomplish the job. Some of them are: i) sampling the
data, ii) mute, iii) amplitude recovery/corrections, iv) filtering, v) deconvolution,
vi) f-k analysis, etc. Some signal processing tools are explained in section three.

2.1 Data acquisition

Shot gather:

Multiple shotpoints:

If more than one shot location is used, reflections arising from the same point on the
interface will be detected at different geophones. The common point of reflection is
known as the common midpoint (CMP).


CMP gather:

The CMP gather lies at the heart of seismic processing for two main reasons:
i) The variation of travel time with offset, the moveout, depends only on the velocity of
the subsurface layers (for horizontal uniform layers), so the subsurface velocity can be
derived. ii) The reflected seismic energy is usually very weak, so it is imperative to
increase the signal-to-noise ratio of most data.
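For a single horizontal layer, the traveltime in a CMP gather follows the hyperbola t(x) = sqrt(t0² + (x/v)²), which is what makes velocity estimation from moveout possible. A minimal sketch; the function name and the numerical values are illustrative assumptions, not from the text:

```python
import math

def nmo_traveltime(t0, offset, velocity):
    """Two-way traveltime to a flat reflector: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

# Assumed illustrative values: zero-offset time 1.0 s, velocity 2000 m/s
t_near = nmo_traveltime(1.0, 0.0, 2000.0)    # equals t0 at zero offset
t_far = nmo_traveltime(1.0, 1000.0, 2000.0)  # later arrival at 1000 m offset
moveout = t_far - t_near                     # the normal moveout
```

The increase of traveltime with offset (the moveout) is the quantity that a velocity analysis inverts for v.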

2.1.1 Static Correction

Static corrections are applied to seismic data to compensate for the effects of variations in
elevation, weathering thickness, weathering velocity, or reference to a datum. The
objective is to determine the reflection arrival times that would have been observed if
all measurements had been made on a flat plane with no weathering or low-velocity
material present. These corrections are based on uphole data, refraction first breaks, or
event smoothing.
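As a concrete illustration, an elevation static simply converts the height of a shot or receiver above the datum into a time shift using a replacement velocity. A minimal sketch; the function name and the numbers are illustrative assumptions:

```python
def elevation_static(elevation, datum, v_replacement):
    """One-way time shift (s) that moves a source or receiver down to the datum."""
    return (elevation - datum) / v_replacement

# Assumed values: shot at 150 m elevation, datum at 100 m, replacement velocity 2000 m/s
shift = elevation_static(150.0, 100.0, 2000.0)  # 50 m / 2000 m/s = 0.025 s
```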

2.2.1 Stacking (Velocity Analysis, NMO/DMO)

This section describes an iterative process of applying NMO, DMO, and standard
velocity analysis. DMO improves the quality of the stack and the usefulness of the
stacking velocity field. A variety of methods are available (constant-velocity stacks,
constant-velocity gathers, semblance) which work to different extents with different data
types. NMO and DMO are applied with the final velocity field after convergence.

2.2.2 Migration

Migration leads to the final product, in either depth or time. We can apply migration
using velocities based on our velocity analysis if they are good enough, by testing a range
of different velocities to determine which one collapses diffractions correctly, or by using
other information. Care is required to produce a generally smooth velocity field. A
seismic section before and after migration is shown below as an example.

2.2.3 Interpretation

This is the final stage, one could say the finished product, of the seismic processing
sequence. The subsurface geology is generally derived from this step.

3. Signal Processing Tools

3.1 Basic Fourier Theory

The Fourier series expansion of a periodic function f with period 2π is:

    f(x) = a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]

where, for any non-negative integer n:

    nx is the nth harmonic (in radians) of the function f,

    a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx  are the even Fourier coefficients of f, and

    b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx  are the odd Fourier coefficients of f.

Equivalently, in exponential form,

    f(x) = Σ_{n=−∞}^{∞} c_n e^{inx}

where i is the imaginary unit, and

    c_n = (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx

in accordance with Euler's formula.

Example 1: Simple Fourier series

Let f be periodic of period 2π, with f(x) = x for x from −π to π. Note that this function is a
periodic version of the identity function.

[Figure: plot of the periodic identity function, a sawtooth wave, and an animated plot of
the first five successive partial Fourier series.]

We will compute the Fourier coefficients for this function:

    a_n = (1/π) ∫_{−π}^{π} x cos(nx) dx = 0

    b_n = (1/π) ∫_{−π}^{π} x sin(nx) dx = (2/n) (−1)^{n+1}

Notice that the a_n are 0 because the integrands x cos(nx) are odd functions. Hence the Fourier
series for this function is:

    f(x) = 2 Σ_{n=1}^{∞} (−1)^{n+1} sin(nx)/n = 2 ( sin x − (sin 2x)/2 + (sin 3x)/3 − … )
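The partial sums of this series can be checked numerically. A short sketch (the function name is illustrative) that sums the first terms of 2 Σ (−1)^{n+1} sin(nx)/n and converges toward f(x) = x inside (−π, π):

```python
import math

def sawtooth_partial(x, n_terms):
    """Partial Fourier sum of f(x) = x on (-pi, pi): 2 * sum (-1)^(n+1) sin(n x)/n."""
    return 2.0 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                     for n in range(1, n_terms + 1))

approx = sawtooth_partial(1.0, 2000)  # approaches f(1.0) = 1.0 as n_terms grows
```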

3.2 Fourier Transform

In mathematics, the Fourier transform is a certain linear operator that maps functions to
other functions. Loosely speaking, the Fourier transform decomposes a function into a
continuous spectrum of its frequency components, and the inverse transform synthesizes a
function from its spectrum of frequency components. A useful analogy is the relationship
between a series of pure notes (the frequency components) and a musical chord (the
function itself). In mathematical physics, the Fourier transform of a signal f(t) can be
thought of as that signal in the "frequency domain." This is similar to the basic idea of the
various other Fourier transforms, including the Fourier series of a periodic function.
(See also the fractional Fourier transform and the linear canonical transform for
generalizations.)

Suppose f is a complex-valued Lebesgue integrable function. The Fourier transform to
the frequency domain, F(ω), is given by the function:

    F(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−iωt} dt,  for every real number ω
When the independent variable t represents time (with SI unit of seconds), the transform
variable ω represents angular frequency (in radians per second).
Other notations for this same function are f̂(ω) and ℱ{f}(ω). The function F is
complex-valued in general. (i represents the imaginary unit.)
If F(ω) is defined as above, and f(t) is sufficiently smooth, then it can be
reconstructed by the inverse transform:

    f(t) = (1/√(2π)) ∫_{−∞}^{∞} F(ω) e^{iωt} dω,  for every real number t

The interpretation of F(ω) is aided by expressing it in polar coordinate form,
F(ω) = A(ω) e^{iφ(ω)}, where:

    A(ω) = |F(ω)|,  the amplitude
    φ(ω) = arg F(ω),  the phase

Then the inverse transform can be written:

    f(t) = (1/√(2π)) ∫_{−∞}^{∞} A(ω) e^{i(ωt + φ(ω))} dω

which is a recombination of all the frequency components of f(t). Each component is a
complex sinusoid of the form e^{iωt} whose amplitude is proportional to A(ω) and whose
initial phase angle (at t = 0) is φ(ω).
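Under the unitary convention F(ω) = (1/√(2π)) ∫ f(t) e^{−iωt} dt, the Gaussian e^{−t²/2} is its own Fourier transform, which gives a convenient numerical check. A sketch approximating the integral with a midpoint Riemann sum (the function name, integration bounds, and step count are illustrative assumptions):

```python
import cmath
import math

def fourier_transform(f, omega, t_min=-20.0, t_max=20.0, n=4000):
    """Midpoint-rule approximation of F(w) = (1/sqrt(2 pi)) * integral f(t) e^{-i w t} dt."""
    dt = (t_max - t_min) / n
    total = 0.0 + 0.0j
    for k in range(n):
        t = t_min + (k + 0.5) * dt  # midpoint of the k-th subinterval
        total += f(t) * cmath.exp(-1j * omega * t)
    return total * dt / math.sqrt(2.0 * math.pi)

gaussian = lambda t: math.exp(-t * t / 2.0)
F1 = fourier_transform(gaussian, 1.0)  # should be close to exp(-1/2)
```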

3.2.1 Discrete Fourier transform

In mathematics, the discrete Fourier transform (DFT), sometimes called the finite Fourier
transform, is a Fourier transform widely employed in signal processing and related fields
to analyze the frequencies contained in a sampled signal, to solve partial differential
equations, and to perform other operations such as convolutions. The DFT can be
computed efficiently in practice using a fast Fourier transform (FFT) algorithm.
The sequence of N complex numbers x_0, ..., x_{N−1} is transformed into the sequence of N
complex numbers X_0, ..., X_{N−1} by the DFT according to the formula:

    X_k = Σ_{n=0}^{N−1} x_n e^{−(2πi/N) k n},  k = 0, ..., N−1

where e is the base of the natural logarithm, i is the imaginary unit (i² = −1), and π is pi.
The transform is sometimes denoted by the symbol ℱ, as in X = ℱ{x} or ℱ(x).
The inverse discrete Fourier transform (IDFT) is given by

    x_n = (1/N) Σ_{k=0}^{N−1} X_k e^{(2πi/N) k n},  n = 0, ..., N−1
Note that the normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and
the signs of the exponents are merely conventions, and differ in some treatments. The
only requirements of these conventions are that the DFT and IDFT have opposite-sign
exponents and that the product of their normalization factors be 1/N. A normalization of
1/√N for both the DFT and IDFT makes the transforms unitary, which has some
theoretical advantages, but it is often more practical in numerical computation to perform
the scaling all at once as above (and a unit scaling can be convenient in other ways).
(The convention of a negative sign in the exponent is often convenient because it means
that Xk is the amplitude of a "positive frequency" 2πk / N. Equivalently, the DFT is often
thought of as a matched filter: when looking for a frequency of +1, one correlates the
incoming signal with a frequency of −1.)
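The defining sums translate directly into code. A naive O(N²) sketch using only the standard library (an FFT computes the same result faster; the function names are illustrative):

```python
import cmath

def dft(x):
    """Naive DFT: X_k = sum_n x_n * exp(-2 pi i k n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT with the 1/N factor: x_n = (1/N) sum_k X_k * exp(2 pi i k n / N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 3.0, 4.0]
spectrum = dft(signal)      # spectrum[0] is the plain sum of the samples
recovered = idft(spectrum)  # round-trips back to the original samples
```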

3.2.2 Z-transform
In mathematics and signal processing, the Z-transform converts a discrete time-domain
signal, which is a sequence of real numbers, into a complex frequency-domain
representation.
The Z-transform and advanced Z-transform were introduced (under the Z-transform
name) by E. I. Jury in 1958 in Sampled-Data Control Systems (John Wiley & Sons). The
idea contained within the Z-transform was previously known as the "generating function
method."
The (unilateral) Z-transform is to discrete time domain signals what the one-sided
Laplace transform is to continuous time domain signals.
The Z-transform, like many other integral transforms, can be defined as either a one-
sided or two-sided transform.
Bilateral Z-Transform
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the function X(z)
defined as

    X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}

where n is an integer and z is, in general, a complex number:

    z = A e^{jφ}

where A is the magnitude of z, and φ is the angular frequency (in radians per sample).
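For a finite causal sequence the bilateral sum reduces to finitely many terms, so X(z) can be evaluated directly at any point z. A minimal sketch (the function name is illustrative):

```python
def z_transform(x, z):
    """Z-transform of a finite causal sequence x[0..N-1]: X(z) = sum_n x[n] * z^(-n)."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

# X(z) of [1, 2, 3] is 1 + 2 z^-1 + 3 z^-2
at_two = z_transform([1.0, 2.0, 3.0], 2.0)  # 1 + 1 + 0.75 = 2.75
at_one = z_transform([1.0, 2.0, 3.0], 1.0)  # z = 1 just sums the sequence
```

Evaluating X(z) on the unit circle z = e^{jφ} recovers the discrete-time Fourier transform of the sequence.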

3.3 Convolution
In mathematics and, in particular, functional analysis, convolution is a mathematical
operator which takes two functions f and g and produces a third function that in a sense
represents the amount of overlap between f and a reversed and translated version of g. A
convolution is a kind of very general moving average, as one can see by taking one of
the functions to be an indicator function of an interval.

The convolution of f and g is written f ∗ g. It is defined as the integral of the product
of the two functions after one is reversed and shifted. As such, it is a particular kind of
integral transform:

    (f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ

By change of variables, replacing τ by t − τ, it is sometimes written as:

    (f ∗ g)(t) = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ
The integration range depends on the domain on which the functions are defined. While
the symbol t is used above, it need not represent the time domain. In the case of a finite
integration range, f and g are often considered to extend periodically in both directions,
so that the term g(t − τ) does not imply a range violation. This use of periodic domains
is sometimes called a cyclic, circular or periodic convolution. Of course, extension with
zeros is also possible. Using zero-extended or infinite domains is sometimes called a
linear convolution, especially in the discrete case below.
If X and Y are two independent random variables with probability distributions f and g,
respectively, then the probability distribution of the sum X + Y is given by the
convolution f ∗ g.
For discrete functions, one can use a discrete version of the convolution. It is given by

    (f ∗ g)[n] = Σ_m f[m] g[n − m]
When multiplying two polynomials, the coefficients of the product are given by the
convolution of the original coefficient sequences, in this sense (using extension with
zeros as mentioned above).
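The discrete definition, and its polynomial-multiplication reading, can be sketched directly (the function name is illustrative):

```python
def convolve(f, g):
    """Discrete linear convolution with zero extension: (f*g)[n] = sum_m f[m] * g[n-m]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for m, fm in enumerate(f):
        for k, gk in enumerate(g):
            out[m + k] += fm * gk  # f[m] * g[n-m] contributes at n = m + k
    return out

# Polynomial multiplication: (1 + x)(1 + x) = 1 + 2x + x^2
coeffs = convolve([1.0, 1.0], [1.0, 1.0])
```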
Generalizing the above cases, the convolution can be defined for any two integrable
functions defined on a locally compact topological group (see convolution on groups).
A different generalization is the convolution of distributions.



Associativity with scalar multiplication

    a (f ∗ g) = (a f) ∗ g = f ∗ (a g)

for any real (or complex) number a.

Differentiation rule

    D(f ∗ g) = (Df) ∗ g = f ∗ (Dg)

where Df denotes the derivative of f or, in the discrete case, the difference operator
Df[n] = f[n + 1] − f[n].

Convolution theorem

The convolution theorem states that

    ℱ(f ∗ g) = √(2π) · ℱ(f) · ℱ(g)

where ℱ(f) denotes the Fourier transform of f (the constant depends on the
normalization convention). Versions of this theorem also hold for
the Laplace transform, two-sided Laplace transform and Mellin transform.
See also the less trivial Titchmarsh convolution theorem.
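The convolution theorem can be verified numerically for the DFT, where cyclic convolution in one domain is exact pointwise multiplication in the other. A sketch using naive transforms (function names and test data are illustrative):

```python
import cmath

def dft(x):
    """Naive DFT of a length-N sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(f, g):
    """Cyclic convolution, matching the DFT's periodic extension of the sequences."""
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
g = [0.5, 0.0, 1.0, 0.0]

lhs = dft(circular_convolve(f, g))                 # transform of the convolution
rhs = [Fk * Gk for Fk, Gk in zip(dft(f), dft(g))]  # pointwise product of transforms
# lhs and rhs agree term by term
```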

Convolution and related operations are found in many applications of engineering and
mathematics:
1. In statistics, as noted above, a weighted moving average is a convolution; also, the
probability distribution of the sum of two independent random variables is the
convolution of their individual distributions.
2. In optics, many kinds of "blur" are described by convolutions. A shadow (e.g. the
shadow on the table when you hold your hand between the table and a light source) is the
convolution of the shape of the light source that is casting the shadow and the object
whose shadow is being cast. An out-of-focus photograph is the convolution of the sharp
image with the shape of the iris diaphragm. The photographic term for this is bokeh.
3. Similarly, in digital image processing, convolutional filtering plays an important role
in many important algorithms in edge detection and related processes.
4. In linear acoustics, an echo is the convolution of the original sound with a function
representing the various objects that are reflecting it.
5. In artificial reverberation (digital signal processing, pro audio), convolution is used to
map the impulse response of a real room onto a digital audio signal (see the previous and
next points for additional information).
6. In electrical engineering and other disciplines, the output (response) of a (stationary, or
time- or space-invariant) linear system is the convolution of the input (excitation) with
the system's response to an impulse or Dirac delta function. See LTI system theory and
digital signal processing.
7. In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a
chain of delta pulses, and the measured fluorescence is a sum of exponential decays from
each delta pulse.
8. In physics, wherever there is a linear system with a "superposition principle," a
convolution operation makes an appearance. This convolution term is also fundamental in
the Navier-Stokes equations, the subject of a Clay Mathematics Institute Millennium
Problem with an associated million-dollar prize.

3.4 Cross-correlation
In statistics, the term cross-correlation is sometimes used to refer to the covariance
cov(X, Y) between two random vectors X and Y, in order to distinguish that concept from
the "covariance" of a random vector X, which is understood to be the matrix of
covariances between the scalar components of X.
In signal processing, the cross-correlation (or sometimes "cross-covariance") is a measure
of similarity of two signals, commonly used to find features in an unknown signal by
comparing it to a known one. It is a function of the relative time between the signals, is
sometimes called the sliding dot product, and has applications in pattern recognition and
cryptanalysis.
For discrete functions f_i and g_i the cross-correlation is defined as

    (f ⋆ g)_i = Σ_j f*_j g_{i+j}

where the sum is over the appropriate values of the integer j and a superscript asterisk
indicates the complex conjugate. For continuous functions f(x) and g(x) the cross-
correlation is defined as

    (f ⋆ g)(x) = ∫ f*(t) g(x + t) dt

where the integral is over the appropriate values of t.
The cross-correlation is similar in nature to the convolution of two functions. Whereas
convolution involves reversing a signal, then shifting it and multiplying by another
signal, correlation only involves shifting it and multiplying (no reversing).
In an autocorrelation, which is the cross-correlation of a signal with itself, there will
always be a peak at a lag of zero.
If X and Y are two independent random variables with probability distributions f and g,
respectively, then the probability distribution of the difference Y − X is given by the
cross-correlation f ⋆ g. In contrast, the convolution f ∗ g gives the probability distribution
of the sum X + Y.
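In seismic and signal-processing practice, the sliding dot product is used to locate a known wavelet in a trace. A toy sketch for real signals (all names and data are illustrative assumptions):

```python
def best_lag(signal, template):
    """Return the shift at which the sliding dot product of template against signal peaks."""
    def score(j):
        return sum(signal[n + j] * template[n] for n in range(len(template)))
    lags = range(len(signal) - len(template) + 1)
    return max(lags, key=score)

# A wavelet [1, 2, 1] buried at index 2 of an otherwise quiet trace
trace = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
wavelet = [1.0, 2.0, 1.0]
lag = best_lag(trace, wavelet)  # the correlation peaks where the wavelet sits
```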

3.5 Autocorrelation

[Figure: a plot showing 100 random numbers with a "hidden" sine function, and an
autocorrelation of the series on the bottom.]
Autocorrelation is a mathematical tool used frequently in signal processing for analysing
functions or series of values, such as time-domain signals. Informally, it is a measure of
how well a signal matches a time-shifted version of itself, as a function of the amount of
time shift. More precisely, it is the cross-correlation of a signal with itself.
Autocorrelation is useful for finding repeating patterns in a signal, such as determining
the presence of a periodic signal which has been buried under noise, or identifying the
missing fundamental frequency in a signal implied by its harmonic frequencies.

Given a signal f(t), the continuous autocorrelation R_ff(τ) is most often defined as the
continuous cross-correlation integral of f(t) with itself, at lag τ:

    R_ff(τ) = (f ⋆ f)(τ) = ∫_{−∞}^{∞} f(t + τ) f*(t) dt = f*(−τ) ∗ f(τ)

where f* represents the complex conjugate and ∗ represents convolution. For a real
function, f* = f.
The discrete autocorrelation R at lag j for a discrete signal x_n is

    R_xx(j) = Σ_n x_n x*_{n−j}
The above definitions work for signals that are square integrable, or square summable,
that is, of finite energy. Signals that "last forever" are treated instead as random
processes, in which case different definitions are needed, based on expected values. For
wide-sense-stationary random processes, the autocorrelations are defined as:

    R_ff(τ) = E[ f(t) f*(t − τ) ],  R_xx(j) = E[ x_n x*_{n−j} ]

For processes that are not stationary, these will also be functions of t, or n.
For processes that are also ergodic, the expectation can be replaced by the limit of a time
average. The autocorrelation of an ergodic process is sometimes defined as or equated to

    R_ff(τ) = lim_{T→∞} (1/T) ∫_0^T f(t + τ) f*(t) dt,
    R_xx(j) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x_n x*_{n−j}
These definitions have the advantage that they give sensible well-defined single-
parameter results for periodic functions, even when those functions are not the output of
stationary ergodic processes.
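Finding a periodicity buried in a signal, as described above, can be sketched with the discrete definition. The sine period and all names below are illustrative assumptions; small lags are skipped because any smooth signal correlates strongly with a slightly shifted copy of itself:

```python
import math

def autocorrelation(x, j):
    """Discrete autocorrelation of a real signal at lag j: R[j] = sum_n x[n] * x[n-j]."""
    return sum(x[n] * x[n - j] for n in range(j, len(x)))

# A pure sine of period 20 samples; its autocorrelation peaks again at lag 20
x = [math.sin(2.0 * math.pi * n / 20.0) for n in range(200)]
search_lags = range(10, 31)  # skip the trivial near-zero lags
peak_lag = max(search_lags, key=lambda j: autocorrelation(x, j))
```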


                     N.B. For more information, please contact:

                                Dr. S. M. Rahman
                 Dept. of Applied Physics & Electronic Engineering
                        University of Rajshahi, Bangladesh.

