Measuring and locating earthquakes

Main article: Seismology

Earthquakes can be recorded by seismometers at great distances because seismic waves
travel through the whole of the Earth's interior. The absolute size of a quake is
conventionally reported by numbers on the moment magnitude scale (formerly the Richter
scale, on which a magnitude 7 event causes serious damage over large areas), whereas the
felt intensity is reported using the modified Mercalli intensity scale (intensity II-XII).

Every tremor produces different types of seismic waves which travel through rock with
different velocities: the longitudinal P-waves (shock- or pressure waves), the transverse S-
waves (both body waves) and several surface waves (Rayleigh and Love waves). The
propagation velocity of the seismic waves ranges from approx. 3 km/s up to 13 km/s,
depending on the density and elasticity of the medium. In the Earth's interior, P-waves
travel much faster than S-waves (a ratio of approximately 1.7:1). The differences in travel
time from the epicentre to the observatory are a measure of the distance and can be used to
image both the sources of quakes and structures within the Earth. The depth of the
hypocenter can also be computed roughly.

In solid rock P-waves travel at about 6 to 7 km per second; the velocity increases within the
deep mantle to ~13 km/s. The velocity of S-waves ranges from 2–3 km/s in light sediments
and 4–5 km/s in the Earth's crust, up to 7 km/s in the deep mantle. As a consequence, the first
waves of a distant earthquake arrive at an observatory via the Earth's mantle.

Rule of thumb: On average, the distance in kilometers to the earthquake is the number of
seconds between the P and S waves multiplied by 8 [1]. Slight deviations are caused by
inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was
located in 1913 by Beno Gutenberg.
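
As a quick illustration, the rule of thumb can be written as a few lines of Python. This is a
minimal sketch; the factor of 8 km per second of S-P delay is the approximation quoted above,
and the arrival times are hypothetical values invented for the example.

  # Rough epicentral distance from the S-P arrival-time difference,
  # using the rule-of-thumb factor of ~8 km per second of delay.
  def distance_km(p_arrival_s: float, s_arrival_s: float) -> float:
      """Approximate distance to the earthquake from P and S arrival times (s)."""
      sp_delay = s_arrival_s - p_arrival_s
      return sp_delay * 8.0

  # Hypothetical arrivals: P at 10 s, S at 34 s after the record starts.
  print(distance_km(10.0, 34.0))  # -> 192.0 (km), a rough estimate only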




Measuring the Size of an Earthquake
   Earthquakes range broadly in size. A rock-burst in an Idaho silver mine may involve the
fracture of 1 meter of rock; the 1965 Rat Island earthquake in the Aleutian arc involved a 650
kilometer length of the Earth's crust. Earthquakes can be even smaller and even larger. If an
earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be
subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal
depths and are either simply not felt or their felt pattern does not really indicate their true size.

   Today, state-of-the-art seismic systems transmit data from the seismograph via telephone
line and satellite directly to a central digital computer. A preliminary location, depth-of-focus,
and magnitude can now be obtained within minutes of the onset of an earthquake. The only
limiting factor is how long the seismic waves take to travel from the epicenter to the stations -
usually less than 10 minutes.

Magnitude
   Modern seismographic systems precisely amplify and record ground motion (typically at
periods of between 0.1 and 100 seconds) as a function of time. This amplification and
recording as a function of time is the source of instrumental amplitude and arrival-time data
on near and distant earthquakes. Although similar seismographs have existed since the 1890s,
it was only in the 1930s that Charles F. Richter, a California seismologist, introduced the
concept of earthquake magnitude. His original definition held only for California earthquakes
occurring within 600 km of a particular type of seismograph (the Wood-Anderson torsion
instrument). His basic idea was quite simple: by knowing the distance from a seismograph to
an earthquake and observing the maximum signal amplitude recorded on the seismograph, an
empirical quantitative ranking of the earthquake's inherent size or strength could be made.
Most California earthquakes occur within the top 16 km of the crust; to a first approximation,
corrections for variations in earthquake focal depth were, therefore, unnecessary.

   Richter's original magnitude scale (ML) was then extended to observations of earthquakes
of any distance and of focal depths ranging between 0 and 700 km. Because earthquakes
excite both body waves, which travel into and through the Earth, and surface waves, which
are constrained to follow the natural wave guide of the Earth's uppermost layers, two
magnitude scales evolved - the mb and MS scales.

  The standard body-wave magnitude formula is

  mb = log10(A/T) + Q(D,h) ,

  where A is the amplitude of ground motion (in microns); T is the corresponding period (in
seconds); and Q(D,h) is a correction factor that is a function of distance, D (degrees), between
epicenter and station and focal depth, h (in kilometers), of the earthquake. The standard
surface-wave formula is

  MS = log10 (A/T) + 1.66 log10 (D) + 3.30 .

  There are many variations of these formulas that take into account effects of specific
geographic regions, so that the final computed magnitude is reasonably consistent with
Richter's original definition of ML. Negative magnitude values are permissible.
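
As an illustration, the surface-wave formula can be evaluated directly, since all of its terms
are given above; the body-wave formula additionally needs the tabulated correction Q(D,h),
which is not reproduced here. The reading below (amplitude, period, distance) is a
hypothetical example, not a real observation.

  import math

  def surface_wave_magnitude(amp_microns: float, period_s: float,
                             dist_deg: float) -> float:
      """MS = log10(A/T) + 1.66*log10(D) + 3.30 (A in microns, D in degrees)."""
      return math.log10(amp_microns / period_s) + 1.66 * math.log10(dist_deg) + 3.30

  # Hypothetical reading: 400-micron, 20-s Rayleigh wave at 60 degrees distance.
  print(round(surface_wave_magnitude(400.0, 20.0, 60.0), 1))  # -> 7.6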

   A rough idea of frequency of occurrence of large earthquakes is given by the following
table:

     MS     Earthquakes
             per year
 ---------- -----------
 8.5 - 8.9      0.3
 8.0 - 8.4      1.1
 7.5 - 7.9      3.1
 7.0 - 7.4     15
 6.5 - 6.9     56
 6.0 - 6.4    210

   This table is based on data for a recent 47 year period. Perhaps the rates of earthquake
occurrence are highly variable and some other 47 year period could give quite different
results.
  The original mb scale utilized compressional body P-wave amplitudes with periods of 4-5 s,
but recent observations are generally of 1 s-period P waves. The MS scale has consistently
used Rayleigh surface waves in the period range from 18 to 22 s.

   When initially developed, these magnitude scales were considered to be equivalent; in other
words, earthquakes of all sizes were thought to radiate fixed proportions of energy at different
periods. But it turns out that larger earthquakes, which have larger rupture surfaces,
systematically radiate more long-period energy. Thus, for very large earthquakes, body-wave
magnitudes badly underestimate true earthquake size; the maximum body-wave magnitudes
are about 6.5 - 6.8. Similarly, the surface-wave magnitudes underestimate the size of very large
earthquakes; the maximum observed values are about 8.3 - 8.7. Some investigators have
suggested that the 100 s mantle Love waves (a type of surface wave) should be used to
estimate magnitude of great earthquakes. However, even this approach ignores the fact that
damage to structures is often caused by energy at shorter periods. Thus, modern seismologists
are increasingly turning to two separate parameters to describe the physical effects of an
earthquake: seismic moment and radiated energy.

Fault Geometry and Seismic Moment, MO

   The orientation of the fault, direction of fault movement, and size of an earthquake can be
described by the fault geometry and seismic moment. These parameters are determined from
waveform analysis of the seismograms produced by an earthquake. The differing shapes and
directions of motion of the waveforms recorded at different distances and azimuths from the
earthquake are used to determine the fault geometry, and the wave amplitudes are used to
compute moment. The seismic moment is related to fundamental parameters of the faulting
process.

  MO = µS<d> ,

   where µ is the shear strength of the faulted rock, S is the area of the fault, and <d> is the
average displacement on the fault. Because fault geometry and observer azimuth are a part of
the computation, moment is a more consistent measure of earthquake size than is magnitude,
and more importantly, moment does not have an intrinsic upper bound. These factors have led
to the definition of a new magnitude scale MW, based on seismic moment, where

  MW = 2/3 log10(MO) - 10.7 .

  The two largest reported moments are 2.5 x 10^30 dyn·cm (dyne·centimeters) for the 1960
Chile earthquake (MS 8.5; MW 9.6) and 7.5 x 10^29 dyn·cm for the 1964 Alaska earthquake
(MS 8.3; MW 9.2). MS approaches its maximum value at a moment between 10^28 and 10^29
dyn·cm.
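
Both relations can be checked numerically. The sketch below reproduces the Chile and Alaska
values quoted above from the stated moments; CGS units (dyn, cm) are used throughout to
match the text.

  import math

  def seismic_moment(mu_dyn_cm2: float, area_cm2: float, slip_cm: float) -> float:
      """MO = mu * S * <d>, in dyn·cm."""
      return mu_dyn_cm2 * area_cm2 * slip_cm

  def moment_magnitude(m0_dyn_cm: float) -> float:
      """MW = (2/3) * log10(MO) - 10.7, with MO in dyn·cm."""
      return (2.0 / 3.0) * math.log10(m0_dyn_cm) - 10.7

  print(round(moment_magnitude(2.5e30), 1))  # 1960 Chile  -> 9.6
  print(round(moment_magnitude(7.5e29), 1))  # 1964 Alaska -> 9.2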

Energy, E

   The amount of energy radiated by an earthquake is a measure of the potential for damage to
man-made structures. Theoretically, its computation requires summing the energy flux over a
broad suite of frequencies generated by an earthquake as it ruptures a fault. Because of
instrumental limitations, most estimates of energy have historically relied on the empirical
relationship developed by Beno Gutenberg and Charles Richter:

  log10E = 11.8 + 1.5MS
   where energy, E, is expressed in ergs. The drawback of this method is that MS is computed
from a narrow bandwidth between approximately 18 and 22 s. It is now known that the energy
radiated by an earthquake is concentrated over a different bandwidth and at higher
frequencies. With the worldwide deployment of modern digitally recording seismographs with
broad bandwidth response, computerized methods are now able to make accurate and explicit
estimates of energy on a routine basis for all major earthquakes. A magnitude based on energy
radiated by an earthquake, Me, can now be defined,

  Me = 2/3 log10E - 2.9.

  For every increase in magnitude by 1 unit, the associated seismic energy increases by about
32 times.
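
The factor-of-32 statement follows directly from the 1.5 coefficient in the Gutenberg-Richter
relation, as the sketch below shows. Note one assumption: the text does not state the units of
E in the Me formula, and the code follows the common convention of joules (N·m) there, while
the Gutenberg-Richter relation uses ergs as stated.

  import math

  def energy_from_ms(ms: float) -> float:
      """Gutenberg-Richter: log10(E) = 11.8 + 1.5*MS, with E in ergs."""
      return 10 ** (11.8 + 1.5 * ms)

  def energy_magnitude(e_joules: float) -> float:
      """Me = (2/3)*log10(E) - 2.9; E assumed here to be in joules (N·m)."""
      return (2.0 / 3.0) * math.log10(e_joules) - 2.9

  # One magnitude unit multiplies the radiated energy by 10**1.5, about 32:
  print(round(energy_from_ms(7.0) / energy_from_ms(6.0), 1))  # -> 31.6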

  Although Mw and Me are both magnitudes, they describe different physical properties of the
earthquake. Mw, computed from low-frequency seismic data, is a measure of the area ruptured
by an earthquake. Me, computed from high frequency seismic data, is a measure of seismic
potential for damage. Consequently, Mw and Me often do not have the same numerical value.

Intensity

   The increase in the degree of surface shaking (intensity) for each unit increase of
magnitude of a shallow crustal earthquake is unknown. Intensity is based on an earthquake's
local accelerations and how long these persist. Intensity and magnitude thus both depend on
many variables that include exactly how rock breaks and how energy travels from an
earthquake to a receiver. These factors make it difficult for engineers and others who use
earthquake intensity and magnitude data to evaluate the error bounds that may exist for their
particular applications.

  An example of how local soil conditions can greatly influence local intensity is given by
catastrophic damage in Mexico City from the 1985 MS 8.1 Mexico earthquake, centered some
300 km away. Resonances of the soil-filled basin under parts of Mexico City amplified
ground motions at periods of 2 seconds by a factor of 75. This shaking led to selective
damage to buildings 15 - 25 stories high (same resonant period), resulting in losses to
buildings of about $4.0 billion and at least 8,000 fatalities.

   The occurrence of an earthquake is a complex physical process. When an earthquake
occurs, much of the available local stress is used to power the earthquake fracture growth to
produce heat rather than to generate seismic waves. Of an earthquake system's total energy,
perhaps 10 percent to less than 1 percent is ultimately radiated as seismic energy. So the
degree to which an earthquake lowers the Earth's available potential energy is only
fractionally observed as radiated seismic energy.

Determining the Depth of an Earthquake

  Earthquakes can occur anywhere between the Earth's surface and about 700 kilometers
below the surface. For scientific purposes, this earthquake depth range of 0 - 700 km is
divided into three zones: shallow, intermediate, and deep.

  Shallow earthquakes are between 0 and 70 km deep; intermediate earthquakes, 70 - 300 km
deep; and deep earthquakes, 300 - 700 km deep. In general, the term "deep-focus
earthquakes" is applied to earthquakes deeper than 70 km. All earthquakes deeper than 70 km
are localized within great slabs of shallow lithosphere that are sinking into the Earth's mantle.

   The evidence for deep-focus earthquakes was discovered in 1922 by H.H. Turner of
Oxford, England. Previously, all earthquakes were considered to have shallow focal depths.
The existence of deep-focus earthquakes was confirmed in 1931 from studies of the
seismograms of several earthquakes, which in turn led to the construction of travel-time
curves for intermediate and deep earthquakes.

   The most obvious indication on a seismogram that a large earthquake has a deep focus is
the small amplitude, or height, of the recorded surface waves and the uncomplicated character
of the P and S waves. Although the surface-wave pattern does generally indicate that an
earthquake is either shallow or may have some depth, the most accurate method of
determining the focal depth of an earthquake is to read a depth phase recorded on the
seismogram. The most characteristic depth phase is pP. This is the P wave that is reflected
from the surface of the Earth at a point relatively near the epicenter. At distant seismograph
stations, the pP follows the P wave by a time interval that changes slowly with distance but
rapidly with depth. This time interval, pP-P (pP minus P), is used to compute depth-of-focus
tables. Using the time difference of pP-P as read from the seismogram and the distance
between the epicenter and the seismograph station, the depth of the earthquake can be
determined from published travel-time curves or depth tables.

   Another seismic wave used to determine focal depth is the sP phase - an S wave reflected
as a P wave from the Earth's surface at a point near the epicenter. This wave is recorded after
the pP by about one-half of the pP-P time interval. The depth of an earthquake can be
determined from the sP phase in the same manner as the pP phase by using the appropriate
travel-time curves or depth tables for sP.

  If the pP and sP waves can be identified on the seismogram, an accurate focal depth can be
determined.
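
The table-lookup step lends itself to a short sketch. The pP-P delays and depths below are
invented for illustration only; real determinations use published depth-phase tables at the
observed epicentral distance.

  # Hypothetical (pP-P delay in s, focal depth in km) pairs at one fixed distance.
  PP_MINUS_P_TABLE = [(10.0, 40.0), (20.0, 80.0), (40.0, 160.0), (60.0, 240.0)]

  def depth_from_pp_p(delay_s: float) -> float:
      """Linearly interpolate focal depth from the pP-P delay."""
      pts = PP_MINUS_P_TABLE
      for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
          if t0 <= delay_s <= t1:
              return d0 + (delay_s - t0) / (t1 - t0) * (d1 - d0)
      raise ValueError("pP-P delay outside table range")

  print(depth_from_pp_p(30.0))  # -> 120.0 km with this toy table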

by William Spence, Stuart A. Sipkin, and George L. Choy
Earthquakes and Volcanoes
Volume 21, Number 1, 1989

http://earthquake.usgs.gov/learning/topics/measure.php

What Are Aftershocks, Foreshocks and Earthquake
Clusters?
The calculations in this system are based on known behaviors of aftershocks. Scientists have
shown that the rules governing aftershock behavior also apply to "aftershocks" that are larger
than their main shock - i.e., the possibility that the first event was a foreshock. These rules
include:

Aftershock Facts: In a cluster, the earthquake with the largest magnitude is called the main
shock; anything before it is a foreshock and anything after it is an aftershock. A main shock
will be redefined as a foreshock if a subsequent event has a larger magnitude. The rate of
main shocks after foreshocks follows the same patterns as aftershocks after main shocks.
Aftershock sequences follow predictable patterns as a group, although the individual
earthquakes are random and unpredictable. This pattern tells us that aftershocks decay with
increasing time, increasing distance, and increasing magnitude. It is this average pattern that
this system uses to make real-time predictions about the probability of ground shaking.

Distance: Aftershocks usually occur geographically near the main shock. The stress on the
main shock's fault changes drastically during the main shock and that fault produces most of
the aftershocks. Sometimes the change in stress caused by the main shock is great enough to
trigger aftershocks on other, nearby faults, and for a very large main shock sometimes even
farther away. As a rule of thumb, we call earthquakes aftershocks if they are at a distance
from the main shock's fault no greater than the length of that fault. The automatic system
keeps track of where aftershocks have occurred, and when enough aftershocks have been
recorded to pinpoint the more and less active locations, the system adjusts the probabilities on
the map to reflect those local variations.





Time: An earthquake large enough to cause damage will probably be followed by several felt
aftershocks within the first hour. The rate of aftershocks decreases quickly - the decrease is
proportional to the inverse of time since the main shock. This means the second day has about
1/2 the number of aftershocks of the first day and the tenth day has about 1/10 the number of the
first day. These patterns describe only the overall behavior of aftershocks; the actual times,
numbers and locations of the aftershocks are random. We call an earthquake an aftershock as
long as the rate at which earthquakes occur in that region is greater than the rate before the
main shock. How long this lasts depends on the size of the main shock (bigger earthquakes
have more aftershocks) and how active the region was before the main shock (if the region
was seismically quiet before the main shock, the aftershocks continue above the previous rate
for a longer time). Thus, an aftershock can occur weeks or decades after a main shock.
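
The 1/t decay described above is easy to express directly; the first-day rate below is a
hypothetical number, and real sequences are fit with extra parameters, so this shows only the
average pattern, not a prediction for any single sequence.

  def aftershocks_on_day(day: int, rate_day1: float = 100.0) -> float:
      """Expected daily aftershock count if the rate decays as 1/t."""
      return rate_day1 / day

  print(aftershocks_on_day(1))   # -> 100.0 on day one
  print(aftershocks_on_day(2))   # -> 50.0, about 1/2 of day one
  print(aftershocks_on_day(10))  # -> 10.0, about 1/10 of day one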

Magnitude: Bigger earthquakes have more and larger aftershocks. The bigger the main shock
the bigger the largest aftershock will be, on average. The difference in magnitude between the
main shock and largest aftershock ranges from 0.1 to 3 or more, but averages 1.2 (a M5.5
aftershock to a M6.7 main shock for example). There are more small aftershocks than large
ones. Aftershocks of all magnitudes decrease at the same rate, but because the large
aftershocks are already less frequent, the decay can be noticed more quickly. Large
aftershocks can occur months or even years after the main shock.
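
The average 1.2-unit gap quoted above gives a one-line estimate of the largest expected
aftershock; as the text stresses, individual sequences scatter widely around this average.

  def expected_largest_aftershock(mainshock_mag: float, avg_gap: float = 1.2) -> float:
      """Average-case largest aftershock, per the ~1.2-unit gap quoted above."""
      return mainshock_mag - avg_gap

  print(expected_largest_aftershock(6.7))  # -> 5.5, the example in the text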
http://earthquake.usgs.gov/earthquakes/step/explain.php



Repeating Earthquakes




Figure 1. East-west and north-south components of ground motion for the 1922 and 1934
Parkfield events recorded at Berkeley, California.

Do earthquakes occur completely randomly, or do they have a pattern that tends to repeat? If
they're random, there's no hope for earthquake prediction. But as we saw in the earthquake
history, regular repetitions of the same rupture event, called a "characteristic earthquake," may
be occurring at Parkfield.

Indeed, this repetition was part of the basis for developing the Parkfield experiment. Figure 1
shows seismograms from two of the Parkfield earthquakes, both recorded in Berkeley,
California. Again, the similarity of the first few cycles of the P and S waves means that these
events ruptured the same part of the fault.

Adding to the sense of repetition, similar-size foreshocks occurred 17 minutes before both the
1934 and 1966 Parkfield earthquakes. The foreshocks produced nearly identical Wood-
Anderson seismograms near Berkeley, California.

The repeating rupture of a single section of a fault is conceived in physical models as a "stuck
patch" on the fault's surface, around which strain energy builds up. When the patch ruptures,
the strain is released and the loading process begins again. If the loading rate (plate motion) is
constant, the timing of the earthquakes would be regular.

Apparent repetition on a smaller scale

Cole and Ellsworth (1995) have extended this observation to show that almost all M>4
earthquakes near Parkfield can be grouped into classes, each characterized by a distinctive
seismogram and presumably repeatedly rupturing the same fault patch despite the occurrence
of two M 6 earthquakes. The 1992 and November, 1993 events fit the formal criteria for
potential Parkfield foreshocks (see status) and prompted public advisories of heightened
probability of an M 6 earthquake. The connection of these events to the characteristic
earthquakes at Parkfield is suggested by the fact that waveforms from the November, 1993
event were identical to those from a foreshock three days before the 1966 Parkfield
mainshock. However, none of the earthquakes in Table 1 appears to be a repeat of the "17-
minute" foreshocks.
  Table 1. Earthquakes (M 4 and larger) in Parkfield since 1986.
   Date     Latitude Longitude Depth (km) Magnitude (coda)
1992/10/20 35.9285°N 120.4728°W 10.21           4.3
1993/04/04 35.9413°N 120.4925°W 7.65            4.2
1993/11/14 35.9527°N 120.4968°W 11.70           4.6
1994/12/20 35.9175°N 120.4643°W 9.10            4.7

Earthquake repetition on a "micro" scale

Since 1990, the high resolution seismic network at Parkfield (HRSN) has recorded
earthquakes as small as M=0 on this stretch of the fault. Nadeau et al. (1994) showed that
about half of the events (0.2 < Mw < 1.3) occur in about 300 distinct spatial clusters and have
highly similar waveforms (Figure 2). Within clusters, relative locations based on waveform
cross-correlation are accurate to within 5 m (Nadeau, 1997). Many of these clusters have
characteristic recurrence times that appear to scale with the magnitude of the repeating events,
supporting a simple model of stick slip on a stuck patch. Variations in the characteristic
recurrence time were tentatively interpreted as indicating variations in slip rate on the fault.
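
The waveform-similarity grouping rests on normalized cross-correlation. Below is a minimal
sketch of that measurement; the synthetic wavelet is invented for the test, whereas real
studies correlate recorded seismograms and cluster events whose peak correlation is near 1.

  import numpy as np

  def max_normalized_xcorr(a: np.ndarray, b: np.ndarray) -> float:
      """Peak normalized cross-correlation of two equal-length traces (1.0 = identical)."""
      a = (a - a.mean()) / (a.std() * len(a))
      b = (b - b.mean()) / b.std()
      return float(np.max(np.correlate(a, b, mode="full")))

  # Toy check: a synthetic wavelet correlates perfectly with itself.
  t = np.linspace(0.0, 1.0, 500)
  w = np.sin(40.0 * t) * np.exp(-3.0 * t)
  print(round(max_normalized_xcorr(w, w), 3))  # -> 1.0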




Figure 2. Vertical component seismograms from clustered microearthquakes on the San
Andreas fault at Parkfield. Three types of events (numbers on left) are identified on the basis
of subtle differences in the waveform.

Rubin et al. (1999) and Waldhauser et al. (1999) determined that these clusters are largely
confined to distinct, nearly-horizontal "streaks" on the fault surface. The underlying cause of
this structural organization on the fault surface remains to be determined.

These repeating microearthquakes also present an enigma. The simple model of a stuck patch
fails to explain the occurrence rate of the smallest events. The time intervals between their
recurrences are far too long (Nadeau and Johnson, 1998). Several models have been advanced
to explain the mechanics of these repeating microearthquakes. We hope to test them directly
by drilling into the rupture zone of one of these events as part of the National Science
Foundation's EarthScope initiative.

Other interpretations of the data include Tullis (1999), who suggests that the micro-
earthquake clusters may represent lithologically distinct sites that have velocity-weakening
frictional behavior. Boatwright and Cocco (1996) suggest that a mix of velocity-weakening
and velocity-strengthening behavior may occur where the fault creeps macroscopically while
producing small earthquakes.

The picture emerging from these studies is that, at least on the Parkfield section of the San
Andreas fault, earthquakes tend to occur as repeated ruptures of fixed patches of the fault, in
approximate accordance with simple stick slip models, over a wide range of physical sizes.

http://earthquake.usgs.gov/research/parkfield/repeat.php



How Do I Locate That Earthquake's Epicenter?

To figure out just where that earthquake happened, you need to look at your seismogram and
you need to know what at least two other seismographs recorded for the same earthquake.
You will also need a map of the world, a ruler, a pencil, and a compass for drawing circles on
the map.

Here's an example of a seismogram:




Figure 1 - Our typical seismogram from before, this time marked for this exercise (from Bolt,
1978).




One minute intervals are marked by the small lines printed just above the squiggles made by
the seismic waves (the time may be marked differently on some seismographs). The distance
between the beginning of the first P wave and the first S wave tells you how many seconds
the waves are apart. This number will be used to tell you how far your seismograph is from
the epicenter of the earthquake.

Finding the Distance to the Epicenter and the Earthquake's Magnitude
   1. Measure the distance between the first P wave and the first S wave. In this case, the
      first P and S waves are 24 seconds apart.
   2. Find the point for 24 seconds on the left side of the chart below and mark that point.
      According to the chart, this earthquake's epicenter was 215 kilometers away.
   3. Measure the amplitude of the strongest wave. The amplitude is the height (on paper) of
      the strongest wave. On this seismogram, the amplitude is 23 millimeters. Find 23
      millimeters on the right side of the chart and mark that point.
   4. Place a ruler (or straight edge) on the chart between the points you marked for the
      distance to the epicenter and the amplitude. The point where your ruler crosses the
      middle line on the chart marks the magnitude (strength) of the earthquake. This
      earthquake had a magnitude of 5.0.

Figure 2 - Use the amplitude to derive the magnitude of the earthquake, and the distance
from the earthquake to the station. (from Bolt, 1978)
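
The chart encodes empirical travel-time curves, but the same step can be approximated with a
uniform-velocity formula: the S-P delay equals d/vs - d/vp, so d = delay / (1/vs - 1/vp). The
crustal velocities below are typical assumed values, not the chart's, which is why the result
differs somewhat from the 215 km read off the chart.

  VP_KM_S = 6.0   # assumed P-wave speed in the crust
  VS_KM_S = 3.5   # assumed S-wave speed in the crust

  def distance_from_sp(sp_seconds: float) -> float:
      """Distance d satisfying d/vs - d/vp = S-P delay (uniform-velocity model)."""
      return sp_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

  print(round(distance_from_sp(24.0)))  # -> 202 (km); the chart gives 215 km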



Finding the Epicenter

You have just figured out how far your seismograph is from the epicenter and how strong the
earthquake was, but you still don't know exactly where the earthquake occurred. This is where
the compass, the map, and the other seismograph records come in.

   1. Check the scale on your map. It should look something like a piece of a ruler. All maps
      are different. On your map, one centimeter could be equal to 100 kilometers or
      something like that.
   2. Figure out how long the distance to the epicenter (in centimeters) is on your map. For
      example, say your map has a scale where one centimeter is equal to 100 kilometers. If
      the epicenter of the earthquake is 215 kilometers away, that equals 2.15 centimeters
      on the map.
   3. Using your compass, draw a circle with a radius equal to the number you came up with
      in Step #2 (the radius is the distance from the center of a circle to its edge). The
      center of the circle will be the location of your seismograph. The epicenter of the
      earthquake is somewhere on the edge of that circle.

Figure 3 - The point where the three circles intersect is the epicenter of the earthquake.
This technique is called 'triangulation.'




   4. Do the same thing for the distance to the epicenter that the other seismograms
      recorded (with the location of those seismographs at the center of their circles). All
      of the circles should overlap. The point where all of the circles overlap is the
      approximate epicenter of the earthquake.
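
Numerically, the circle-intersection step can be solved by least squares. The station
coordinates and distances below are hypothetical values on a local kilometre grid; with real
data the circles rarely meet in a single point, and the least-squares solution picks the best
compromise.

  import numpy as np

  stations = np.array([[0.0, 0.0], [300.0, 50.0], [100.0, 250.0]])  # station (x, y), km
  dists = np.array([215.0, 120.0, 180.0])                           # distances, km

  def locate(stations: np.ndarray, dists: np.ndarray) -> np.ndarray:
      """Linearized least-squares intersection of the distance circles.

      Subtracting the first circle's equation from the others cancels the
      quadratic terms, leaving a small linear system for (x, y)."""
      A = 2.0 * (stations[1:] - stations[0])
      b = (dists[0] ** 2 - dists[1:] ** 2
           + np.sum(stations[1:] ** 2, axis=1) - np.sum(stations[0] ** 2))
      sol, *_ = np.linalg.lstsq(A, b, rcond=None)
      return sol

  print(locate(stations, dists))  # -> approximate (x, y) of the epicenter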

How Are Earthquake Magnitudes Measured?

The Richter Scale

The magnitude of most earthquakes is measured on the Richter scale, invented by Charles F.
Richter in 1935. The Richter magnitude is calculated from the amplitude of the largest
seismic wave recorded for the earthquake, no matter what type of wave was the strongest.

The Richter magnitudes are based on a logarithmic scale (base 10). What this means is that
for each whole number you go up on the Richter scale, the amplitude of the ground motion
recorded by a seismograph goes up ten times. Using this scale, a magnitude 5 earthquake
would result in ten times the level of ground shaking as a magnitude 4 earthquake (and about
32 times as much energy would be released). To give you an idea how these numbers can
add up, think of it in terms of the energy released by explosives: a magnitude 1 seismic wave
releases as much energy as blowing up 6 ounces of TNT, while a magnitude 8 earthquake
releases as much energy as detonating 6 million tons of TNT. Pretty impressive, huh?
Fortunately, most of the earthquakes that occur each year are magnitude 2.5 or less, too
small to be felt by most people.

Figure 1 - Charles Richter studying a seismogram.

The Richter magnitude scale can be used to describe earthquakes so small that they are
expressed in negative numbers. The scale also has no upper limit, so it can describe
earthquakes of unimaginable and (so far) unexperienced intensity, such as magnitude 10.0 and
beyond.
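
The arithmetic behind these comparisons is a direct consequence of the logarithm. A short
sketch follows; the TNT figures reproduce the numbers quoted above.

  def amplitude_ratio(m2: float, m1: float) -> float:
      """Ground-motion amplitude grows by a factor of 10 per magnitude unit."""
      return 10.0 ** (m2 - m1)

  def energy_ratio(m2: float, m1: float) -> float:
      """Radiated energy grows by a factor of ~32 (10**1.5) per magnitude unit."""
      return 10.0 ** (1.5 * (m2 - m1))

  print(amplitude_ratio(5.0, 4.0))         # -> 10.0
  print(round(energy_ratio(5.0, 4.0), 1))  # -> 31.6
  # 6 ounces of TNT at magnitude 1 scales to millions of tons at magnitude 8:
  print(6.0 * energy_ratio(8.0, 1.0) / 16.0 / 2000.0)  # ounces -> tons; ~5.9 million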

Although Richter originally proposed this way of measuring an earthquake's "size," he only
used a certain type of seismograph and measured shallow earthquakes in Southern California.
Scientists have now made other "magnitude" scales, all calibrated to Richter's original
method, to use a variety of seismographs and measure the depths of earthquakes of all sizes.


The Mercalli Scale

Another way to measure the strength of an earthquake is to use the Mercalli scale. Invented
by Giuseppe Mercalli in 1902, this scale uses the observations of the people who
experienced the earthquake to estimate its intensity.

The Mercalli scale isn't considered as scientific as the Richter scale, though. Some
witnesses of the earthquake might exaggerate just how bad things were during the
earthquake, and you may not find two witnesses who agree on what happened; everybody
will say something different. The amount of damage caused by the earthquake may not
accurately record how strong it was, either.

Figure 2 - Giuseppe Mercalli




Some things that affect the amount of damage that occurs are:

      the building designs,
      the distance from the epicenter,
      and the type of surface material (rock or dirt) the buildings rest on.

Different building designs hold up differently in an earthquake and the further you are from
the earthquake, the less damage you'll usually see. Whether a building is built on solid rock or
sand makes a big difference in how much damage it takes. Solid rock usually shakes less than
sand, so a building built on top of solid rock shouldn't be as damaged as it might if it was
sitting on a sandy lot.

How Do I Read a Seismogram?

When you look at a seismogram, there will be wiggly lines all across it. These are all the
seismic waves that the seismograph has recorded. Most of these waves were so small that
nobody felt them. These tiny microseisms can be caused by heavy traffic near the
seismograph, waves hitting a beach, the wind, and any number of other ordinary things that
cause some shaking of the seismograph. There may also be some little dots or marks evenly
spaced along the paper. These are marks for every minute that the drum of the seismograph
has been turning. How far apart these minute marks are will depend on what kind of
seismograph you have.
Figure 1 - A typical seismogram.




So which wiggles are the earthquake? The P wave will be the first wiggle that is bigger than
the rest of the little ones (the microseisms). Because P waves are the fastest seismic waves,
they will usually be the first ones that your seismograph records. The next set of seismic
waves on your seismogram will be the S waves. These are usually bigger than the P waves.

If there aren't any S waves marked on your seismogram, it probably means the earthquake
happened on the other side of the planet. S waves can't travel through the liquid layers of
the earth, so these waves never made it to your seismograph.

Figure 2 - A cross-section of the earth, with earthquake wave paths defined and their
shadow-zones highlighted.




The surface waves (Love and Rayleigh waves) are the other, often larger, waves marked on
the seismogram. They have a lower frequency, which means that waves (the lines; the ups-
and-downs) are more spread out. Surface waves travel a little slower than S waves (which, in
turn, are slower than P waves) so they tend to arrive at the seismograph just after the S waves.
For shallow earthquakes (earthquakes with a focus near the surface of the earth), the surface
waves may be the largest waves recorded by the seismograph. Often they are the only waves
recorded a long distance from medium-sized earthquakes.
In order to accurately record earthquake waves at a seismic station, at least three
seismographs are needed: one each for E-W, N-S, and vertical motion.




By examining seismograms at 3 different recording stations, it is possible to "triangulate"
the epicenter of an earthquake.




http://www.sciencecourseware.com/VirtualEarthquake/




How does a seismograph work? What is the Richter scale?

A seismograph is the device that scientists use to measure earthquakes. The goal of a
seismograph is to accurately record the motion of the ground during a quake. If you live in a
city, you may have noticed that buildings sometimes shake when a big truck or a subway train
rolls by. Good seismographs are therefore isolated and connected to bedrock to prevent this
sort of "data pollution."



The main problem that must be solved in creating a seismograph is that when the ground
shakes, so does the instrument. Therefore, most seismographs involve a large mass of some
sort. You could make a very simple seismograph by hanging a large weight from a rope over
a table. By attaching a pen to the weight and taping a piece of paper to the table so that the
pen can draw on the paper, you could record tremors in the Earth's crust (earthquakes). If you
used a roll of paper and a motor that slowly pulled the paper across the table, you would be
able to record tremors over time. However, it would take a pretty large tremor for you to see
anything. In a real seismograph, levers or electronics are used to magnify the signal so that
very small tremors are detectable. A big mechanical seismograph may have a weight attached
that weighs 1,000 pounds (450 kg) or more, and it drives a set of levers that significantly
magnify the pen's motion.
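
In equations, the suspended mass behaves like a damped oscillator driven by ground
acceleration, and the pen records the mass's motion relative to the frame. Here is a minimal
simulation sketch; the oscillator parameters and the input "shaking" are arbitrary choices
for illustration.

  import math

  def simulate(ground_accel, dt=0.01, steps=2000, omega0=2.0, damping=0.7):
      """Integrate x'' + 2*damping*omega0*x' + omega0**2 * x = -a_ground(t).

      x is the mass position relative to the frame - what the pen draws."""
      x, v, trace = 0.0, 0.0, []
      for i in range(steps):
          a = -ground_accel(i * dt) - 2.0 * damping * omega0 * v - omega0 ** 2 * x
          v += a * dt
          x += v * dt
          trace.append(x)
      return trace

  # A short burst of shaking as the input ground acceleration:
  shake = lambda t: math.sin(10.0 * t) if 1.0 < t < 3.0 else 0.0
  print(max(abs(x) for x in simulate(shake)))  # peak relative displacement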

The Richter scale is a standard scale used to compare earthquakes. It is a logarithmic scale,
meaning that the numbers on the scale measure factors of 10. So, for example, an earthquake
that measures 4.0 on the Richter scale is 10 times larger than one that measures 3.0. On the
Richter scale, anything below 2.0 is undetectable to a normal person and is called a
microquake. Microquakes occur constantly. Moderate earthquakes measure less than 6.0 or
so on the Richter scale. Earthquakes measuring more than 6.0 can cause significant damage.
The biggest quake in the world since 1900 scored a 9.5 on the Richter scale. It rocked Chile
on May 22, 1960.
http://science.howstuffworks.com/question142.htm/printable

Is the reliable prediction of individual earthquakes a
realistic scientific goal?
                                                                                         IAN MAIN

The recent earthquake in Colombia (Fig. 1) has once again illustrated to the general
public the inability of science to predict such natural catastrophes. Despite the
significant global effort that has gone into the investigation of the nucleation process of
earthquakes, such events still seem to strike suddenly and without obvious warning. Not
all natural catastrophes are so apparently unpredictable, however.

                           Figure 1 Devastation caused by the recent earthquake in Colombia.





For example, the explosive eruption of Mount St Helens in 1980 was preceded by visible
ground deformation of up to 1 metre per day, by eruptions of gas and steam, and by thousands
of small earthquakes, culminating in the magnitude 5 event that finally breached the carapace.
In this example, nearly two decades ago now, the general public had been given official
warning of the likelihood of such an event, on a timescale of a few months. So, if other
sudden onset natural disasters can be predicted to some degree, what is special about
earthquakes? Why have no unambiguous, reliable precursors been observed, as they
commonly are in laboratory tests (see, for example, Fig. 2)? In the absence of reliable,
accurate prediction methods, what should we do instead? How far should we go in even trying
to predict earthquakes?

                           Figure 2 Comparison of laboratory21 and field22 measurements of
                           precursory strain (solid lines).



The idea that science cannot predict everything is not new; it dates back to the 1755 Great
Lisbon earthquake, which shattered contemporary European belief in a benign, predictable
Universe1. In the eighteenth century 'Age of Reason', the picture of a predictable Universe1
was based on the spectacular success of linear mathematics, such as Newton's theory of
gravitation. The history of science during this century has to some extent echoed this earlier
debate. Theories from the earlier part of the century, such as Einstein's relativity, and the
development of quantum mechanics, were found to be spectacularly, even terrifyingly,
successful when tested against experiment and observation. Such success was mirrored in the
increasing faith that the general public placed in science. However, the century is closing with
the gradual realization by both practitioners and the general public that we should not expect
scientific predictions to be infallible. Even simple nonlinear systems can exhibit 'chaotic'
behaviour, whereas more 'complex' nonlinear systems, with lots of interacting elements, can
produce remarkable statistical stability while retaining an inherently random (if not
completely chaotic) component2. The null hypothesis to be disproved is not that earthquakes
are predictable, but that they are not.

The question to be addressed in this debate is whether the accurate, reliable prediction of
individual earthquakes is a realistic scientific goal, and, if not, how far should we go in
attempting to assess the predictability of the earthquake generation process? Recent research
and observation have shown that the process of seismogenesis is not completely random —
earthquakes tend to be localized in space, primarily on plate boundaries, and seem to be
clustered in time more than would be expected for a random process. The scale-invariant
nature of fault morphology, the earthquake frequency-magnitude distribution, the
spatiotemporal clustering of earthquakes, the relatively constant dynamic stress drop, and the
apparent ease with which earthquakes can be triggered by small perturbations in stress are all
testament to a degree of determinism and predictability in the properties of earthquake
populations3,4. The debate here centres on the prediction of individual events.

For the purposes of this debate, we define a sliding scale of earthquake 'prediction' as follows.

   1. Time-independent hazard. We assume that earthquakes are a random (Poisson)
      process in time, and use past locations of earthquakes, active faults, geological
      recurrence times and/or fault slip rates from plate tectonic or satellite data to constrain
      the future long-term seismic hazard5. We then calculate the likely occurrence of
      ground-shaking from a combination of source magnitude probability with path and site
      effects, and include a calculation of the associated errors. Such calculations can also
      be used in building design and planning of land use, and for the estimation of
      earthquake insurance (a minimal probability sketch follows this list).



   2. Time-dependent hazard. Here we accept a degree of predictability in the process, in
      that the seismic hazard varies with time. We might include linear theories, where the
      hazard increases after the last previous event6, or the idea of a 'characteristic
      earthquake' with a relatively similar magnitude, location and approximate repeat time
      predicted from the geological dating of previous events7. Surprisingly, the tendency of
      earthquakes to cluster in space and time includes the possibility of a seismic hazard that
      actually decreases with time8. This would allow the refinement of hazard to include
      the time and duration of a building's use as a variable in calculating the seismic risk.



   3. Earthquake forecasting. Here we would try to predict some of the features of an
      impending earthquake, usually on the basis of the observation of a precursory signal.
      The prediction would still be probabilistic, in the sense that the magnitude, time
      and location might not be given precisely or reliably, but that there is some
      physical connection above the level of chance between the observation of a precursor
      and the subsequent event. Forecasting would also have to include a precise statement
      of the probabilities and errors involved, and would have to demonstrate more
      predictability than the clustering referred to in time-dependent hazard. The practical
      utility of this would be to enable the relevant authorities to prepare for an impending
       event on a timescale of months to weeks. Practical difficulties include identifying
       reliable, unambiguous precursors9-11, and the acceptance of an inherent proportion of
       missed events or false alarms, involving evacuation for up to several months at a time,
       resulting in a loss of public confidence.



   4. Deterministic prediction. Earthquakes are inherently predictable. We can reliably
      know in advance their location (latitude, longitude and depth), magnitude, and time of
      occurrence, all within narrow limits (again above the level of chance), so that a
      planned evacuation can take place.
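
As promised under time-independent hazard (item 1 above), a minimal Poisson-hazard
calculation: with a long-term average rate of lambda events per year, the probability of at
least one event in t years is 1 - exp(-lambda*t). The rate and exposure time below are
hypothetical example values.

  import math

  def prob_at_least_one(rate_per_year: float, years: float) -> float:
      """Poisson model: P(at least one event in t years) = 1 - exp(-rate*t)."""
      return 1.0 - math.exp(-rate_per_year * years)

  # E.g. one damaging event per 200 years on average, over a 50-year building life:
  print(round(prob_at_least_one(1.0 / 200.0, 50.0), 3))  # -> 0.221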

Time-independent hazard has now been standard practice for three decades, although new
information from geological and satellite data is increasingly being used as a constraint. In
contrast, few seismologists would argue that deterministic prediction as defined above is a
reasonable goal in the medium term, if not for ever12. In the USA, the emphasis has long
since shifted to a better fundamental understanding of the earthquake process and to an
improved calculation of the seismic hazard, apart from an unsuccessful attempt to monitor
precursors to an earthquake near Parkfield, California, which failed to materialize on time.
particularly in the aftermath of the Kobe earthquake in 1995, there is a growing realization
that successful earthquake prediction might not be realistic13. In China, thirty false alarms
have brought power lines and business operations to a standstill in the past three years,
leading to recent government plans to clamp down on unofficial 'predictions'14.

So, if we cannot predict individual earthquakes reliably and accurately with current
knowledge15-20, how far should we go in investigating the degree of predictability that might
exist?

Ian Main
Department of Geology and Geophysics, University of Edinburgh, Edinburgh, UK


References

   1. Voltaire, Candide (Penguin, London, 1997, first published 1759).
   2. Bak, P. How Nature Works: The Science of Self-organised Criticality (Oxford Univ.
      Press, 1997).
   3. Turcotte, D.L. Fractals and Chaos in Geology and Geophysics (Cambridge Univ.
      Press, 1991).
   4. Main, I., Statistical physics, seismogenesis and seismic hazard, Rev. Geophys. 34, 433-
      462 (1996).
   5. Reiter, L. Earthquake Hazard Analysis (Columbia Univ. Press, New York, 1991).
   6. Shimazaki, K. & Nakata, T., Time-predictable recurrence model for large earthquakes,
      Geophys. Res. Lett. 7, 279-283 (1980).
   7. Schwartz, D.P. & Coppersmith, K.J., Fault behavior and characteristic earthquakes:
      Examples from the Wasatch and San Andreas fault systems, J. Geophys. Res. 89,
      5681-5696 (1984).
   8. Davis, P.M., Jackson, D.D. & Kagan, Y.Y., The longer it has been since the last
      earthquake, the longer the expected time till the next?, Bull. Seism. Soc. Am. 79, 1439-
      1456 (1989).
   9. Wyss, M., Second round of evaluation of proposed earthquake precursors, Pure Appl.
       Geophys. 149, 3-16 (1991).
   10. Campbell, W.H. A misuse of public funds: UN support for geomagnetic forecasting of
       earthquakes and meteorological disasters, Eos Trans. Am. Geophys. Union 79, 463-
       465 (1998).
   11. Scholz, C.H. The Mechanics of Earthquakes and Faulting (Cambridge Univ. Press,
       1990).
   12. Main, I., Earthquakes - Long odds on prediction, Nature 385, 19-20 (1997).
   13. Saegusa, A., Japan tries to understand quakes, not predict them, Nature 397, 284
       (1999).
   14. Saegusa, A., China clamps down on inaccurate warnings, Nature 397, 284 (1999).
   15. Macelwane, J.B., Forecasting earthquakes, Bull. Seism. Soc. Am. 36, 1-4 (1946).
   16. Turcotte, D.H., Earthquake prediction, A. Rev. Earth Planet. Sci. 19, 263-281 (1991).
   17. Sneider, R. & van Eck, T., Earthquake prediction: a political problem?, Geol. Rdsch.
       86, 446-463 (1997).
   18. Jordan, T.H., Is the study of earthquakes a basic science?, Seismol. Res. Lett. 68, 259-
       261 (1997).
   19. Evans, R., Assessment of schemes for earthquake prediction: editor's introduction,
       Geophys. J. Int. 131, 413-420 (1997).
   20. Geller, R.J., Earthquake prediction: a critical review, Geophys. J. Int. 131, 425-450
       (1997).
   21. Main, I.G., Sammonds P.R. & Meredith, P.G., Application of a modified Griffith
       criterion to the evolution of fractal damage during compressional rock failure,
       Geophys. J. Int. 115, 367-380 (1993).
   22. Argus, D. & Lyzenga, G.A., Site velocities before and after the Loma Prieta and the
       Gulf of Alaska earthquakes determined from VLBI, Geophys. Res. Lett. 21, 333-336
       (1994).

http://www.nature.com/nature/debates/earthquake/




Seismology: The start of something big?
Rachel Abercrombie


Abstract

Can we predict the final size of an earthquake from observations of its first few seconds? An
extensive study of earthquakes around the Pacific Rim seems to indicate that we can — but
uncertainties remain.

How does a seismic fault, initially essentially immobile, start to slip at speeds of metres per
second as an earthquake rupture front runs along it at speeds of up to 3 kilometres per second?
Does the eventual size of an earthquake depend on the nature of this process? Or do all
earthquakes begin in the same way, with the extent of rupture determined by conditions along
the fault? Such fundamental questions get seismologists talking, because knowing how
earthquakes begin is an essential part of understanding and modelling the dynamics of
earthquake rupture, and may allow an earthquake's course to be predicted. Research until now
has been inconclusive, but results described by Olson and Allen (page 212 of this issue)1
imply that the final magnitude of an earthquake depends at least partially on what happens in
its first few seconds. This timescale is equivalent to less than a tenth of the duration of the
larger earthquakes in their study.

Research into the onset of earthquakes large and small has found that they often begin with
small-amplitude shaking2. The interpretation of these initial 'sub-events' remains
controversial. One model has it that a small, isolated sub-event triggers a larger fault patch,
which itself triggers further fault patches, and so on as long as sufficient energy is available.
In this 'cascade' model, the beginning of a large earthquake is no different from the beginning
of a small earthquake: therefore, predicting the final magnitude from the first few seconds is
impossible. An alternative model is that the small beginning is the last phase of some longer,
slower, sub-seismic 'nucleation' process. (Such a process has admittedly never been reliably
observed3.) In this case, the final magnitude would be related to the nature of the nucleation
process, and seismograms of large earthquakes would look different from those of smaller
earthquakes right from the start.

Early warning systems currently in operation in Japan, Taiwan and Mexico use observations
of the earliest-arriving primary (P) waves to provide a few seconds' warning of subsequent
large ground motion — secondary (S) and surface waves — produced by the same
earthquake. In an earlier study4, Allen and Kanamori investigated the first few seconds of
earthquake seismograms in southern California. They found that the predominant period (a
measure of the frequency) for the first 4 seconds of the P waves provides a good estimate of
the size of earthquakes with a magnitude M of less than 6. The duration of such earthquakes,
defined as the time during which the fault actually moves, is usually less than 4 seconds (the
waves generated by an earthquake last for much longer than the earthquake itself).
Intriguingly, however, Allen and Kanamori's method4 also predicted the approximate
magnitude of three earthquakes of M greater than 6, and so an earthquake duration of more
than 4 seconds. In other words, the final size of the earthquake could be predicted before the
fault stopped moving.

Olson and Allen1 set out to determine whether the final magnitude of the earthquake really
does depend on the predominant period of the onset. They investigated the first few seconds
of 71 earthquakes from California, Alaska, Japan and Taiwan, each recorded at multiple
stations within 100 kilometres of the epicentres. Twenty-four of the earthquakes had a
magnitude larger than 6, with durations of up to 70 seconds. Estimating the predominant
period of the radiated seismic energy for each earthquake, the authors find that this value
increases with magnitude for earthquakes of M between 3 and 8. This finding applies even to
larger earthquakes in which the measurement is made after as little as a tenth of the
earthquake's total duration — suggesting that the final magnitude of an earthquake is indeed
determined a very short time after onset.

Previous studies of earthquake onsets have been limited by the lack of seismometers located
close to the epicentre, and by the fact that standard techniques cannot analyse the frequency
content of such short pieces of the seismograms. The method5 used by Olson and Allen1, and
by Allen and Kanamori before them4, is simple but effective. They calculate the predominant
period from the ratio of the ground displacement to the rate of change of that displacement
(the velocity of the movement) point by point. This measurement can be made as a
seismogram is recorded, and at seismometers up to 100 kilometres from an earthquake's
epicentre.
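
In outline, the point-by-point measurement can be sketched as below. This is a hedged
reconstruction of the idea (smoothed squared displacement over smoothed squared velocity,
after Nakamura's approach, ref. 5), not the authors' exact code; the smoothing constant is
an arbitrary choice here.

  import math

  def predominant_period(displacement, dt, alpha=0.99):
      """Running predominant-period estimate, 2*pi*sqrt(X/D), per sample."""
      x_s, d_s, out = 0.0, 0.0, []
      prev = displacement[0]
      for u in displacement[1:]:
          vel = (u - prev) / dt          # rate of change of displacement
          prev = u
          x_s = alpha * x_s + u * u      # smoothed squared displacement
          d_s = alpha * d_s + vel * vel  # smoothed squared velocity
          out.append(2.0 * math.pi * math.sqrt(x_s / d_s) if d_s > 0 else 0.0)
      return out

  # Toy check: a pure sine with a 2-second period should yield ~2 s.
  dt = 0.01
  sig = [math.sin(2.0 * math.pi * (i * dt) / 2.0) for i in range(5000)]
  print(round(predominant_period(sig, dt)[-1], 2))  # -> ~2.0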
As Olson and Allen note, there is considerable scatter in their results; this leads to large
uncertainties, especially in measurements at individual seismometers. An individual
measurement of a predominant period of 1 second, for example, is consistent with an
earthquake of any magnitude between 3 and 7.5. Most measurements of earthquake
parameters vary significantly between seismometers, but even after averaging over many
stations, any measurement of the mean predominant period produces an uncertainty of at least
one magnitude unit. The predominant period of the 1999 earthquake in Hector Mine,
California (M=7.1; Fig. 1), for instance, is the same as that of an earthquake of M less than 5.

Figure 1: Finding fault.




A view of the Lavic Lake seismic fault in California. The Hector Mine earthquake, one of
those considered by Olson and Allen1 in their study of the initial waves of Pacific Rim
earthquakes, occurred along this fault line on 16 October 1999.



The relationship between the first 4 seconds of an earthquake and its final magnitude implies
either that there is an initial, sub-seismic nucleation phase that is proportional to the size of
the earthquake, or that any triggering cascade of sub-events lasts less than 4 seconds (the
approximate duration of an earthquake of M=6). But these observations of earthquake onsets
are purely empirical, and we are far from understanding how onset, propagation and state of
stress of the surrounding fault interact to determine the final size of a seismic event. Olson
and Allen's study advances that understanding, and thus our ability to predict an earthquake's
size before it reaches its peak. It also raises intriguing questions worthy of further study.


References

   1. Olson, E. L. & Allen, R. M. Nature 438, 212–215 (2005).
   2. Ellsworth, W. L. & Beroza, G. C. Science 268, 851–855 (1995).
   3. Bakun, W. H. et al. Nature 437, 969–974 (2005).
   4. Allen, R. M. & Kanamori, H. Science 300, 786–789 (2003).
   5. Nakamura, Y. in Proc. 13th World Conf. Earthquake Eng. Pap. No. 908 (2004).
http://www.nature.com/nature/journal/v438/n7065/full/438171a.html



Seismology (from Greek σεισμός, seismos, "earthquake", and -λογία, -logia, "study of") is the
scientific study of earthquakes and the propagation of elastic waves through the Earth. The
field also includes studies of earthquake effects, such as tsunamis, as well as diverse seismic
sources such as volcanic, tectonic, oceanic, atmospheric, and artificial processes (such as
explosions). A related field that uses geology to infer information regarding past earthquakes
is paleoseismology. A recording of earth motion as a function of time is called a seismogram.


Seismic waves
Main article: Seismic wave

Earthquakes, and other sources, produce different types of seismic waves which travel
through rock, and provide an effective way to image both sources and structures deep within
the Earth. There are three basic types of seismic waves in solids: P-waves, S-waves (both
body waves) and interface waves. The two basic kinds of surface waves (Rayleigh and Love),
which travel along a solid-air interface, can be fundamentally explained in terms of
interacting P- and/or S-waves.




Propagation of seismic waves in the ground, and the effect of the presence of a land mine.

Pressure waves or Primary waves (P-waves) are longitudinal waves that travel faster than any
other seismic wave through a given material and are therefore the first waves to appear on a
seismogram.

S-waves, also called shear or secondary waves, are transverse waves that travel more slowly
than P-waves and thus appear later on a seismogram. Particle motion is perpendicular to the
direction of wave propagation. Shear waves cannot travel through fluids such as air or water,
which have essentially no shear strength.

Surface waves travel more slowly than P-waves and S-waves, but because they are guided by
the surface of the Earth (and their energy is thus trapped near the Earth's surface) they can be
much larger in amplitude than body waves, and can be the largest signals seen in earthquake
seismograms. They are particularly strongly excited when the seismic source is close to the
surface of the Earth, such as the case of a shallow earthquake or explosion.

For large enough earthquakes, one can observe the normal modes of the Earth. These modes
are excited at discrete frequencies and can be observed for days after the generating event.
The first observations were made in the 1960s as the advent of higher fidelity instruments
coincided with two of the largest earthquakes of the 20th century - the 1960 Great Chilean
earthquake and the 1964 Great Alaskan earthquake. Since then, the normal modes of the Earth
have given us some of the strongest constraints on the deep structure of the Earth.

One of the earliest important discoveries (suggested by Richard Dixon Oldham in 1906 and
definitively shown by Harold Jeffreys in 1926) was that the outer core of the Earth is liquid.
Pressure waves (P-waves) pass through the core. Transverse or shear waves (S-waves), which
shake side-to-side, require a rigid material to propagate, so they do not pass through the
liquid outer core. Thus, the liquid core casts a "shadow" on the side of the planet opposite
the earthquake, in which no direct S-waves are observed. The reduced P-wave velocity of the
outer core also causes a substantial delay for P-waves penetrating the core from the
(seismically faster) mantle.

Seismic waves produced by explosions or vibrating controlled sources are one of the primary
methods of underground exploration in geophysics (in addition to many different
electromagnetic methods such as induced polarization and magnetotellurics). Controlled
source seismology has been used to map salt domes, anticlines and other geologic traps in
petroleum-bearing rocks, as well as faults, rock types, and long-buried giant meteor
craters. For example, the Chicxulub impactor, which is believed to have killed the dinosaurs,
was localized to Central America by analyzing ejecta at the Cretaceous-Tertiary boundary, and
then physically proven to exist using seismic maps from oil exploration.

Using seismic tomography with earthquake waves, the interior of the Earth has been
completely mapped to a resolution of several hundred kilometers. This process has enabled
scientists to identify convection cells, mantle plumes and other large-scale features of the
inner Earth.

Seismographs are instruments that sense and record the motion of the Earth. Networks of
seismographs today continuously monitor the seismic environment of the planet, allowing for
the monitoring and analysis of global earthquakes and tsunami warnings, as well as recording
a variety of seismic signals arising from non-earthquake sources ranging from explosions
(nuclear and chemical), to pressure variations on the ocean floor induced by ocean waves (the
global microseism), to cryospheric events associated with large icebergs and glaciers. Above-
ocean meteor strikes as large as ten kilotons of TNT (equivalent to about 4.2 × 10¹³ J of
effective explosive force) have been recorded by seismographs. A major motivation for the
global instrumentation of the Earth with seismographs has been for the monitoring of nuclear
testing.

One of the first attempts at the scientific study of earthquakes followed the 1755 Lisbon
earthquake. Other especially notable earthquakes that spurred major developments in the
science of seismology include the 1857 Basilicata earthquake, 1906 San Francisco
earthquake, the 1964 Alaska earthquake and the 2004 Sumatra-Andaman earthquake. An
extensive list of famous earthquakes can be found on the List of earthquakes page.

Earthquake prediction
Main article: Earthquake prediction

Forecasting a probable timing, location, magnitude and other important features of a
forthcoming seismic event is called earthquake prediction. Most seismologists do not believe
that a system to provide timely warnings for individual earthquakes has yet been developed,
and many believe that such a system would be unlikely to give significant warning of
impending seismic events. More general forecasts, however, are routinely used to establish
seismic hazard. Such forecasts estimate the probability of an earthquake of a particular size
affecting a particular location within a particular time span and they are routinely used in
earthquake engineering.
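
To make the nature of such a forecast concrete, here is a minimal sketch under an assumed
Poisson recurrence model; the 140-year recurrence interval is a hypothetical number chosen
for illustration, not a published hazard figure.

    import math

    # If quakes of a given size recur on average every 140 years on a
    # fault (hypothetical), the chance of at least one in a 30-year
    # window under a Poisson model is 1 - exp(-t / T_mean).
    mean_recurrence_yr = 140.0
    window_yr = 30.0
    p = 1.0 - math.exp(-window_yr / mean_recurrence_yr)
    print(f"{p:.0%}")   # about 19%

Statements of exactly this form ("an X percent chance of a magnitude-Y quake within Z years")
are what feed into building codes and engineering decisions.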

Various attempts have been made by seismologists and others to create effective systems for
precise earthquake predictions, including the VAN method. Such methods have yet to be
generally accepted in the seismology community.

http://en.wikipedia.org/wiki/Seismology

Earthquake Alarm

Impending earthquakes have been sending us warning signals--and people are starting to
listen




By Tom Bleier and Friedemann Freund // December 2005

Deep under Pakistan-administered Kashmir, rocks broke, faults slipped, and the earth shook
with such violence on 8 October that more than 70 000 people died and more than 3 million
were left homeless. But what happened in the weeks and days and
hours leading up to that horrible event? Were there any signs that such devastation was
coming? We think there were, but owing to a satellite malfunction we can't say for sure.

How many lives could have been saved in that one event alone if we'd known of the
earthquake 10 minutes in advance? An hour? A day?

Currently, predictions are vague at best. By studying historical earthquake records,
monitoring the motion of the earth's crust by satellite, and measuring with strain monitors
below the earth's surface, researchers can project a high probability of an earthquake in a
certain area within about 30 years. But short-term earthquake forecasting just hasn't worked.

Accurate short-term forecasts would save lives and enable businesses to recover sooner. With
just a 10-minute warning, trains could move out of tunnels, and people could move to safer
parts of buildings or flee unsafe buildings. With an hour's warning, people could shut off the
water and gas lines coming into their homes and move to safety. In industry, workers could
shut down dangerous processes and back up critical data; those in potentially dangerous
positions, such as refinery employees and high-rise construction workers, could evacuate.
Local government officials could alert emergency-response personnel and move critical
equipment and vehicles outdoors. With a day's warning, people could collect their families
and congregate in a safe location, bringing food, water, and fuel with them. Local and state
governments could place emergency teams and equipment strategically and evacuate bridges
and tunnels.

It seems that earthquakes should be predictable. After all, we can predict hurricanes and
floods using detailed satellite imagery and sophisticated computer models. Using advanced
Doppler radar, we can even tell minutes ahead of time that a tornado will form.

Accurate earthquake warnings are, at last, within reach. They will come not from the
mechanical phenomena--measurements of the movement of the earth's crust--that have been
the focus of decades of study, but, rather, from electromagnetic phenomena. And, remarkably,
these predictions will come from signals gathered not only at the earth's surface but also far
above it, in the ionosphere.

For decades, researchers have detected strange phenomena in the form of odd radio noise
and eerie lights in the sky in the weeks, hours, and days preceding earthquakes. But only
recently have experts started systematically monitoring those phenomena and correlating
them to earthquakes.

A light or glow in the sky sometimes heralds a big earthquake. On 17 January 1995, for
example, there were 23 reported sightings in Kobe, Japan, of a white, blue, or orange light
extending some 200 meters in the air and spreading 1 to 8 kilometers across the ground.
Hours later a 6.9-magnitude earthquake killed more than 5500 people. Sky watchers and
geologists have documented similar lights before earthquakes elsewhere in Japan since the
1960s and in Canada in 1988.

Another sign of an impending quake is a disturbance in the ultralow frequency (ULF) radio
band--1 hertz and below--noticed in the weeks and more dramatically in the hours before an
earthquake. Researchers at Stanford University, in California, documented such signals before
the 1989 Loma Prieta quake, which devastated the San Francisco Bay Area, demolishing
houses, fracturing freeways, and killing 63 people.
Both the lights and the radio waves appear to be electromagnetic disturbances that happen
when crystalline rocks are deformed--or even broken--by the slow grinding of the earth that
occurs just before the dramatic slip that is an earthquake. Although a rock in its normal state
is, of course, an insulator, this cracking creates tremendous electric currents in the ground,
which travel to the surface and into the air.

http://spectrum.ieee.org/computing/hardware/earthquake-alarm/2




Earthquake prediction: Gone and back again

The 1990s and early 2000s were hard times for earthquake prediction research. "For 10 years,
there was limited funding in the U.S.," says Dimitar Ouzounov, a research scientist at
NASA's Goddard Space Flight Center in Greenbelt, Md., and professor at Chapman
University in Orange, Calif. That changed in 2004, Ouzounov says, after a magnitude-9-plus
quake struck off the coast of Sumatra and set off a tsunami, killing more than 225,000 people
in 11 countries.




A collapsed bridge after the Tangshan, China, earthquake on July 28, 1976, which killed more
than 240,000 people. Although Chinese seismologists "successfully" predicted the Haicheng
quake of 1975, they failed to forecast this one.

USGS

It was a seminal moment in earthquake research, Ouzounov says. "It actually showed we were
not able to get in advance any information on the biggest event of the last 100 years."

Improving that information, according to Ouzounov, should include research on geophysical
anomalies that precede large seismic events — regional, days-long increases in temperature or
alterations in the electromagnetic field — that could one day act as warnings of major
earthquakes.

Researchers have failed to validate these signals, though reports of them go back centuries,
and some scientists have deemed them a lost cause. But after the disaster that befell
Sumatra, quake prediction research has regained demand — and funding.

Historical Intrigue

In Europe, accounts of strange animal behavior before earthquakes go back to ancient Greece.
But such signals didn't win serious scientific attention until 1975, when Chinese officials
"successfully" predicted a major earthquake near the city of Haicheng, about 550 kilometers
northeast of Beijing.

Based on the surrounding province's 2,200-year earthquake history and an array of geological
measurements, China's National Earthquake Bureau concluded in 1974 that a major
earthquake would strike the area within the next two years, wrote physical chemist Helmut
Tributsch in "When the Snakes Awake: Animals and Earthquake Prediction." Determined to
avoid the worst, the bureau not only expanded its measurement network, but trained more
than 100,000 "honorary observers" to spot signs of an impending quake: Animals might leave
their burrows, they were told. Well waters might cloud up and bubble. Lightning might strike
from clear skies. Furthermore, the National Earthquake Bureau maintained, a small
earthquake would strike north of the future "big one."

After a minor quake struck just 70 kilometers north of Haicheng in December 1974, reports of
unusual phenomena began streaming in: "Geese flew into trees," Tributsch writes. "Pigs bit at
each other or dug beneath the fences of their sties … Gas bubbles appeared in the pond
water." After a swarm of small quakes struck throughout the region in early February 1975,
officials began evacuating Haicheng; by 2 p.m. on Feb. 4 the evacuation was in full swing,
placing people in emergency shelters and herding animals from their stables.

Then, around 7:36 that night, the quake hit. It was a magnitude 7.3, large enough to destroy or
severely damage about 50 percent of the buildings in the region. Although the quake's
destruction killed 2,041 people, as many as 150,000 deaths and injuries might have occurred
if not for officials' well-timed evacuation, according to the U.S. Geological Survey.

Scientific commissions from around the world converged on the scene to investigate the
"successful prediction." Though the Chinese admitted they had little explanation for why
animal behavior should serve as an earthquake precursor, Tributsch writes that one Chinese
seismologist told a colleague from Caltech that animal observation was "the best method for
earthquake prediction thus far."

American Interest

Following the Haicheng quake, other reports of advances in earthquake prediction appeared,
including some from the former Soviet Union, says John Filson, a geologist emeritus at USGS
in Reston, Va. By 1976, the U.S. National Research Council was convinced, concluding in a
report that with "the appropriate commitment and level of effort," earthquake prediction "may
be possible within 10 years in well-instrumented areas."

Of course, the Haicheng prediction rested on other, more scientific criteria than animal-
behavior sightings. In a 1973 Science paper, scientists at Columbia University's Lamont-
Doherty Geological Observatory in Palisades, N.Y., argued that parameters such as "tilt, fluid
pressure, electrical and magnetic fields, radon emission, the frequency of occurrence of small
local quakes, and the ratio of the number of small to large shocks" could, with enough
diligence, elicit reliable signals of upcoming quakes. They might even explain all those
strange pre-quake phenomena people had reported over the centuries. For example, Tributsch
explained in a 1978 paper in Nature that strongly charged air and particulates had been linked
to "serotonin irritation syndrome" in mammals, "fog in air with less than 100 percent
humidity" and "light-producing electrical discharges."
By 1977, Filson says, Congress established the National Earthquake Hazards Reduction
Program. Up to half of the new program's $30 million budget was to be spent on earthquake
prediction research. It was a scientific free-for-all. "People were measuring everything from
cockroach activity to radon emissions, seismic patterns to strange electrical currents," recalls
Filson, chief of the USGS Office of Earthquake Studies at the time.

And knowing whether to issue a warning from those parameters was no easy task, Filson says.
"I'd get these calls when I was at the dinner table saying, 'John, we've seen this big signal
from our machine or from our data, you'd better do something.' And we'd talk about it, and
the final comment would be, 'Well you'd better do something. What are you going to do?' …
It was a terrible position to be in."

More often than not, Filson says, those "signals" proved faulty. "Two or three days would
pass [with no earthquake], and then I'd call this person back and I'd say, 'What happened?'
And he'd say, 'Our capacitor burnt out' — it was a spurious signal from some electronic
failure."

Even the most exacting experiments failed to produce results. In the mid-1980s, scientists
predicted a four-year window within which an earthquake should occur on the San Andreas
Fault near Parkfield, Calif., based on a roughly 22-year recurrence cycle. The quake did not
occur. (One eventually did occur in 2004, although no obvious precursors were observed,
Filson says.) Furthermore, claims of enhanced magnetic field levels prior to the 1989 Loma
Prieta earthquake near San Francisco also faced steep suspicion (and were recently shown to
be the result of a sensor-system malfunction, according to a 2009 paper in the journal Physics
of the Earth and Planetary Interiors). By the mid-1990s, the National Earthquake Prediction
Evaluation Council had fallen into inactivity. By 1997, a seismologist wrote in a review of
precursor research in Geophysical Journal International that earthquake prediction was
"effectively impossible."

Since then, for the most part, scientists have settled for the next best things. Earthquake
hazard assessments, for example, use past earthquake history to estimate the probability that a
quake of a particular magnitude will occur in an area within a given period. And earthquake
early warning systems can, when they work right, give regions up to a minute's warning of
approaching shock waves from a faraway quake — enough time to stop a train or shut down
nuclear power plants. "That's an achievement, of course," Ouzounov says. "But," he adds,
"it's limited."

Back in the Spotlight

After the 2004 Sumatra earthquake and tsunami, quake prediction was back in demand. In
2006, for example, the National Earthquake Prediction Evaluation Council was re-established.
Ouzounov's current research at NASA is also part of that trend. Ouzounov is working with
NASA colleague Patrick Taylor to study thermal anomalies in regions of major earthquakes
from satellite and ground-based data.

That coupling of quakes with atmospheric temperatures is based, according to Ouzounov and
his colleagues, on gas discharges: Uranium-bearing rocks in Earth's crust emit minute amounts
of radon, a colorless, odorless gas that forms from the decay of radium. As faults shift in the
days leading up to an earthquake, they create new openings from which radon can escape.
That excess radon, in turn, releases alpha particles that tinge the atmosphere with ionic charge
and allow the formation of aerosol-sized particles as ions mix with water. And it is the latent
heat from that chain of physical processes, Ouzounov hypothesizes, that satellites could pick
up as thermal anomalies preceding some major earthquakes.

It's a verifiable concept based on fundamental principles of atmospheric physics, Ouzounov
says. The formation of aerosol-sized particles could also explain other precursor phenomena,
such as earthquake fog and unusual cloud formations. Ouzounov and his colleagues have even
found Earth-radiation anomalies one to two months before the Sumatra earthquake, with a
maximum signal two weeks before the quake.

But, Ouzounov says, such findings by no means authenticate earthquake precursors. "We
need to show more statistical validation of the results," Ouzounov says. "We'd like to show
continuous observation, not just a case-by-case study."

And even if precursors were validated statistically, Ouzounov says, such a finding would not
match the level of earthquake prediction envisioned by scientists in the past. "The approach of
validating the earthquake atmospheric precursors is not very different from the methodology
used in weather forecasting," he says. "Statistically, the weather forecast for Washington,
D.C., is about 80 percent reliable during the winter." Thermal electromagnetic anomalies, on
the other hand, have not shown nearly that kind of reliability in correlation with major
earthquakes on a global scale. As a result, Ouzounov says that any "validated" precursors
would likely supplement, not replace, current earthquake early warning systems.

In the meantime, research on earthquake precursors has improved quite a bit since the 1970s
and 1980s. Instead of the single sensor that may have incorrectly detected changes in the
magnetic field before the Loma Prieta earthquake, Ouzounov says, geophysicists can now test
research and observations against multiple satellites and ground-based monitoring networks
measuring numerous parameters. And other scientists — though still skeptical — are showing
interest.

Last fall, Ouzounov and colleagues invited seismologists and atmospheric scientists to listen
to the latest developments in earthquake precursor research at the annual meeting of the
American Geophysical Union in San Francisco, Calif. With an international collaboration of
experts working on different aspects of this issue, Ouzounov says, scientists can better
evaluate earthquake anomalies. In short, he says, "we are on the quest for validation."

A Work in Progress

Ouzounov has his work cut out for him. "If we've learned one thing over the last several
decades," says Michael Blanpied, associate coordinator for the USGS Earthquake Hazards
Program in Reston, Va., and a member of the National Earthquake Prediction Evaluation
Council, it is "that the kinds of observations that may lead to earthquake prediction are going
to be very subtle. Proof that there is predictability is going to require … a great deal of work
to demonstrate to the scientific community that it's true."

But the issue of earthquake prediction is more complicated than that, Blanpied says, because
even if geoscientists could declare with high certainty that a large quake was 24 hours away,
that would not answer one big social question: What measures would officials need to take?
Twenty-four hours is still a short amount of time to prepare a highly populated area for an
earthquake. Without a specific plan to evacuate, Blanpied says, mass panic could ensue.
Filson knows this from experience. In 1980, a scientist at the U.S. Bureau of Mines
announced that a major quake would strike Lima, Peru, around June 28, 1981. The National
Earthquake Prediction Evaluation Council determined that the finding was based on
questionable theoretical assumptions, and its members responded that they were so confident
no quake would occur during the proposed four-day window around that date that they would
have no fear of being in Lima themselves at the time. Filson went to Peru, where he tried to
calm citizens via television and newspaper interviews.

Filson's message proved correct; no earthquake hit. But at a dinner at the U.S. embassy, he
discovered his assurances had been in vain. At first, Filson thought the tuna sandwiches
served by the ambassador and his wife were an attempt to save taxpayer money. Then the
ambassador's wife revealed that all of the indigenous staff at the embassy, including the
cooks, had left Lima for their hometowns — to die with their families.

No one died, obviously, but the question remains whether earthquake prediction will ever be
reliable enough to prompt solutions rather than panic.

http://www.earthmagazine.org/earth/article/1fe-7d9-4-7



How Earthquakes Work
by Tom Harris


Introduction to How Earthquakes Work

AFP/AFP/Getty Images
Family members gather at the remains of the collapsed Juyuan middle school, where six
children died in Dujiangyan, in southwest China's Sichuan province on May 12, 2008, after an
earthquake measuring 7.8 rocked the province.
As recently witnessed in China and Iceland, an earthquake is one of the most terrifying
phenomena that nature can whip up. We generally think of the ground we stand on as "rock-
solid" and completely stable. An earthquake can shatter that perception instantly, and often
with extreme violence.

Until relatively recently, scientists had only unsubstantiated guesses as to what actually
caused earthquakes. Even today there is still a certain amount of mystery surrounding them,
but scientists have a much clearer understanding of what drives these events.

There has been enormous progress in the past century: Scientists have identified the forces
that cause earthquakes, and developed technology that can tell us an earthquake's magnitude
and origin. The next hurdle is to find a way of predicting earthquakes, so they don't catch
people by surprise.

In this article, we'll find out what causes earthquakes, and we'll also find out why they can
have such a devastating effect on us.

Shaking Ground


An earthquake is a vibration that travels through the earth's crust. Technically, a large truck
rumbling down the street causes a mini-earthquake (you can feel your house shake as it goes
by), but we tend to think of earthquakes as events that affect a fairly large area, such as an
entire city. All kinds of things can cause earthquakes:

      volcanic eruptions
      meteor impacts
      underground explosions (an underground nuclear test, for example)
      collapsing structures (such as a collapsing mine)

But the majority of naturally occurring earthquakes are caused by movements of the earth's
plates, as we'll see in the next section.



Earthquake Facts
Photo courtesy NGDC
Residential damage in Prince William Sound, Alaska, due to liquefaction caused by a 1964
9.2-magnitude earthquake.

We only hear about earthquakes in the news every once in a while, but they are actually an
everyday occurrence on our planet. According to the United States Geological Survey, more
than three million earthquakes occur every year. That's about 8,000 a day, or one every 11
seconds!

The vast majority of these 3 million quakes are extremely weak. Probability also dictates that
a good number of the stronger quakes happen in uninhabited places where no one feels them.
It is the big quakes occurring in highly populated areas that get our attention.

Earthquakes have caused a great deal of property damage over the years, and they have
claimed many lives. In the last hundred years alone, there have been more than 1.5 million
earthquake-related fatalities. Usually, it's not the shaking ground itself that claims lives -- it's
the associated destruction of man-made structures and the instigation of other natural
disasters, such as tsunamis, avalanches and landslides.

In the next section, we'll examine the powerful forces that cause this intense trembling and
find out why earthquakes occur much more often in certain regions.

Plate Tectonics
Hulton Collection/Getty Images
The basic theory of plate tectonics is that the surface layer of the earth -- the lithosphere -- is
composed of many plates that slide over the lubricating asthenosphere layer.

The biggest scientific breakthrough in the history of seismology -- the study of earthquakes --
came in the middle of the 20th century, with the development of the theory of plate tectonics.
Scientists proposed the idea of plate tectonics to explain a number of peculiar phenomena on
earth, such as the apparent movement of continents over time, the clustering of volcanic
activity in certain areas and the presence of huge ridges at the bottom of the ocean.

The basic theory is that the surface layer of the earth -- the lithosphere -- is composed of
many plates that slide over the lubricating asthenosphere layer. At the boundaries between
these huge plates of soil and rock, three different things can happen:

       Plates can move apart - If two plates are moving apart from each other, hot, molten
        rock flows up from the layers of mantle below the lithosphere. This magma comes out
        on the surface (mostly at the bottom of the ocean), where it is called lava. As the lava
        cools, it hardens to form new lithosphere material, filling in the gap. This is called a
        divergent plate boundary.
       Plates can push together - If the two plates are moving toward each other, one plate
        typically pushes under the other one. This subducting plate sinks into the lower
        mantle layers, where it melts. At some boundaries where two plates meet, neither plate
        is in a position to subduct under the other, so they both push against each other to form
        mountains. The lines where plates push toward each other are called convergent plate
        boundaries.
       Plates slide against each other - At other boundaries, plates simply slide by each
        other -- one moves north and one moves south, for example. While the plates don't
        drift directly into each other at these transform boundaries, they are pushed tightly
        together. A great deal of tension builds at the boundary.

Where these plates meet, you'll find faults -- breaks in the earth's crust where the blocks of
rock on each side are moving in different directions. Earthquakes are much more common
along fault lines than they are anywhere else on the planet.

In the next section, we'll look at some different types of faults and see how their movement
creates earthquakes.
Faults

Scientists identify four types of faults, characterized by the orientation of the fault plane --
the break in the rock -- and the relative movement of the two rock blocks:

        In a normal fault (see animation below), the fault plane is nearly vertical. The
         hanging wall, the block of rock positioned above the plane, pushes down across the
         footwall, which is the block of rock below the plane. The footwall, in turn, pushes up
         against the hanging wall. These faults occur where the crust is being pulled apart, due
         to the pull of a divergent plate boundary.




                                           Animation: normal fault

        The fault plane in a reverse fault is also nearly vertical, but the hanging wall pushes
         up and the footwall pushes down. This sort of fault forms where a plate is being
         compressed.
        A thrust fault moves the same way as a reverse fault, but the fault line is nearly
         horizontal. In these faults, which are also caused by compression, the rock of the
         hanging wall is actually pushed up on top of the footwall. This is the sort of fault that
         occurs in a converging plate boundary.
                                           Animation: reverse fault

      In a strike-slip fault, the blocks of rock move in opposite horizontal directions. These
       faults form when the crust pieces are sliding against each other, as in a transform plate
       boundary.




                                                                     Your browser does not support
                                     JavaScript or it is disabled.
                                          Strike-slip fault

In all of these types of faults, the different blocks of rock push very tightly together, creating a
good deal of friction as they move. If this friction level is high enough, the two blocks become
locked -- the friction keeps them from sliding against each other. When this happens, the
forces in the plates continue to push the rock, increasing the pressure applied at the fault.

If the pressure increases to a high enough level, then it will overcome the force of the friction,
and the blocks will suddenly snap forward. To put it another way, as the tectonic forces push
on the "locked" blocks, potential energy builds. When the plates are finally moved, this built-
up energy becomes kinetic. Some fault shifts create visible changes at the earth's surface, but
other shifts occur in rock well under the surface, and so don't create a surface rupture.
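
The cycle just described can be caricatured in a few lines of code as a "spring-slider" toy
model: steady plate motion loads a spring attached to a block held by static friction, and the
block snaps forward whenever the spring force exceeds the friction threshold. All parameters
are arbitrary illustrative units, not measured fault properties.

    # Toy stick-slip model of a locked fault (illustrative units only).
    F_STATIC = 10.0     # static-friction threshold holding the block
    K = 1.0             # spring stiffness (the elastic crust)
    V_PLATE = 0.1       # steady plate motion per time step

    stretch = 0.0
    for step in range(250):
        stretch += V_PLATE                # slow tectonic loading
        if K * stretch > F_STATIC:        # friction finally overcome
            print(f"step {step}: sudden slip of {stretch:.1f} units")
            stretch = 0.0                 # stress drop; cycle restarts

Run it and the "earthquakes" arrive at regular intervals; real faults are irregular because
friction, loading and geometry all vary, but the build-and-release pattern is the same.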




                                           Photo courtesy USGS
                   Crop rows offset by a lateral strike-slip fault that shifted in the
                          1976 earthquake that shook El Progreso, Guatemala.

The initial break that creates a fault, along with these sudden, intense shifts along already
formed faults, are the main sources of earthquakes. Most earthquakes occur around plate
boundaries, because this is where the strain from the plate movements is felt most intensely,
creating fault zones, groups of interconnected faults. In a fault zone, the release of kinetic
energy at one fault may increase the stress -- the potential energy -- in a nearby fault, leading
to other earthquakes. This is one of the reasons that several earthquakes may occur in an area
in a short period of time.
                                         Photo courtesy USGS
                     Railroad tracks shifted by the 1976 Guatemala earthquake

Every now and then, earthquakes do occur in the middle of plates. In fact, one of the most
powerful series of earthquakes ever recorded in the United States occurred in the middle of
the North American continental plate. These earthquakes, which shook several states in 1811
and 1812, originated in Missouri. In the 1970s, scientists found the likely source of this
earthquake: a 600-million-year-old fault zone buried under many layers of rock.

The vibrations of one earthquake in this series were so powerful that they actually rang church
bells as far away as Boston! In the next section, we'll examine earthquake vibrations and see
how they travel through the ground.

Seismic Waves
Photo courtesy USGS
Structural damage caused by vibrations from
the 1964 Alaska earthquake

When a sudden break or shift occurs in the earth's crust, the energy radiates out as seismic
waves, just as the energy from a disturbance in a body of water radiates out in wave form. In
every earthquake, there are several different types of seismic waves.

Body waves move through the inner part of the earth, while surface waves travel over the
surface of the earth. Surface waves -- sometimes called long waves, or simply L waves -- are
responsible for most of the damage associated with earthquakes, because they cause the most
intense vibrations. Surface waves stem from body waves that reach the surface.

There are two main types of body waves.

      Primary waves, also called P waves or compressional waves, travel about 1 to 5
       miles per second (1.6 to 8 km/s), depending on the material they're moving through.
       This speed is greater than the speed of other waves, so P waves arrive first at any
       surface location. They can travel through solid, liquid and gas, and so will pass
       completely through the body of the earth. As they travel through rock, the waves move
       tiny rock particles back and forth -- pushing them apart and then back together -- in
       line with the direction the wave is traveling. These waves typically arrive at the
       surface as an abrupt thud.
      Secondary waves, also called S waves or shear waves, lag a little behind the P
       waves. As these waves move, they displace rock particles outward, pushing them
       perpendicular to the path of the waves. This results in the first period of rolling
       associated with earthquakes. Unlike P waves, S waves don't move straight through the
       earth. They only travel through solid material, and so are stopped at the liquid layer in
       the earth's core.
                           Animation: when P and S waves reach the earth's surface,
                           they form L waves. The most intense L waves
                           radiate out from the epicenter.

Both sorts of body waves do travel around the earth, however, and can be detected on the
opposite side of the planet from the point where the earthquake began. At any given moment,
there are a number of very faint seismic waves moving all around the planet.

Surface waves are something like the waves in a body of water -- they move the surface of the
earth up and down. This generally causes the worst damage because the wave motion rocks
the foundations of manmade structures. L waves are the slowest moving of all waves, so the
most intense shaking usually comes at the end of an earthquake.

In the next section, we'll see how scientists can calculate the origin of an earthquake by
detecting these different waves.

Seismology




Photo courtesy USGS
A fence along a strike-slip fault that shifted in the 1906 San Francisco earthquake.

We saw in the last section that there are three different types of seismic waves, and that these
waves travel at different speeds. While the exact speed of P and S waves varies depending on
the composition of the material they're traveling through, the ratio between the speeds of the
two waves remains relatively constant in any earthquake. P waves generally travel 1.7 times
faster than S waves.

Using this ratio, scientists can calculate the distance between any point on the earth's surface
and the earthquake's focus, the breaking point where the vibrations originated. They do this
with a seismograph, a machine that registers the different waves. To find the distance between
the seismograph and the focus, scientists also need to know when each set of vibrations
arrived. They simply note how much time passed between the arrival of the two waves and
then check a special chart that tells them the distance the waves must have traveled based on
that delay.
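
In code, the chart lookup collapses to a single formula. The wave speeds below are assumed
typical crustal values consistent with the 1.7:1 ratio mentioned above; they are illustrative
numbers, not constants from the article.

    V_P = 6.0            # assumed average P-wave speed, km/s
    V_S = V_P / 1.7      # S-wave speed from the ~1.7:1 ratio

    def distance_km(sp_delay_s):
        # Both waves travel the same path of length d, so
        # d/V_S - d/V_P = delay, giving d = delay * V_P*V_S / (V_P - V_S).
        return sp_delay_s * (V_P * V_S) / (V_P - V_S)

    print(distance_km(10.0))   # a 10-second S-P delay -> roughly 86 km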

If you gather this information from three or more points, you can figure out the location of the
focus through the process of trilateration. Basically, you draw an imaginary sphere around
each seismograph location, with the point of measurement as the center and the measured
distance (let's call it X) from that point to the focus as the radius. The surface of this sphere
describes all the points that are X miles away from the seismograph. The focus, then, must be
somewhere along this sphere. If you come up with two spheres, based on evidence from two
different seismographs, you'll get a two-dimensional circle where they meet. Since the focus
must be along the surface of both spheres, all of the possible focus points are located on the
circle formed by the intersection of these two spheres. A third sphere will intersect only twice
with this circle, giving you two possible focus points. And because the center of each sphere
is on the earth's surface, one of these possible points will be in the air, leaving only one
logical focus location.
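
A minimal version of this calculation, flattened to two dimensions for clarity, can be written
as a small least-squares problem. The station coordinates and distances below are made-up
numbers (chosen so the true focus is near x = 30, y = 40 km).

    import numpy as np

    stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 120.0]])  # km
    dists    = np.array([50.0, 80.6, 85.4])                        # km

    # Subtracting the first circle's equation from the others removes
    # the quadratic terms, leaving linear equations in (x, y):
    #   2(x_i - x_0)x + 2(y_i - y_0)y = d_0^2 - d_i^2 + |s_i|^2 - |s_0|^2
    A = 2.0 * (stations[1:] - stations[0])
    C = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(stations[1:] ** 2, axis=1) - np.sum(stations[0] ** 2))
    focus, *_ = np.linalg.lstsq(A, C, rcond=None)
    print(focus)   # estimated epicenter, roughly [30, 40]

With noisy real measurements, more than three stations and a least-squares fit of exactly this
kind are used to beat down the errors.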

For a more thorough discussion of trilateration, check out How GPS Receivers Work.

Richter Scale




Photo courtesy NGDC
Destruction caused by a (Richter) magnitude 6.6 earthquake in Caracas, Venezuela. The 1967
earthquake took 240 lives and caused more than $50 million worth of property damage.
Whenever a major earthquake is in the news, you'll probably hear about its Richter Scale
rating. You might also hear about its Mercalli Scale rating, though this isn't discussed as
often. These two ratings describe the power of the earthquake from two different perspectives.

The Richter Scale is used to rate the magnitude of an earthquake -- the amount of energy it
released. This is calculated using information gathered by a seismograph. The Richter Scale is
logarithmic, meaning that whole-number jumps indicate a tenfold increase. In this case, the
increase is in wave amplitude. That is, the wave amplitude in a level 6 earthquake is 10 times
greater than in a level 5 earthquake, and the amplitude increases 100 times between a level 7
earthquake and a level 9 earthquake. The amount of energy released increases 31.7 times
between whole number values.
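
Those two scaling rules translate directly into a couple of lines of code (a trivial sketch; the
31.7 factor is just 10 raised to the 1.5 power):

    def amplitude_ratio(m1, m2):
        # Each whole-number step is a tenfold increase in wave amplitude.
        return 10.0 ** (m2 - m1)

    def energy_ratio(m1, m2):
        # Energy grows by about 10^1.5 (~31.7x) per whole-number step.
        return 10.0 ** (1.5 * (m2 - m1))

    print(amplitude_ratio(7, 9))   # 100.0
    print(energy_ratio(5, 6))      # ~31.6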

The largest earthquake on record registered a 9.5 on the currently used scale, though there
have certainly been stronger quakes in Earth's history. The majority of earthquakes
register less than 3 on the Richter Scale. These tremors, which aren't usually felt by humans,
are called microquakes. Generally, you won't see much damage from earthquakes that rate
below 4 on the Richter Scale. Major earthquakes generally register at 7 or above.

Liquefaction
In some areas, severe earthquake damage is the result of liquefaction of soil. In the right conditions,
the violent shaking from an earthquake will make loosely packed sediments and soil behave like a
liquid. When a building or house is built on this type of sediment, liquefaction will cause the structure to
collapse more easily. Highly developed areas built on loose ground material can suffer severe damage
from even a relatively mild earthquake. Liquefaction can also cause severe mudslides, like the ones
that took so many lives in the recent earthquake that shook Central America. In this case, in fact,
mudslides were the most significant destructive force, claiming hundreds of lives.

Richter ratings only give you a rough idea of the actual impact of an earthquake. As we've
seen, an earthquake's destructive power varies depending on the composition of the ground in
an area and the design and placement of manmade structures. The extent of damage is rated
on the Mercalli Scale. Mercalli ratings, which are given as Roman numerals, are based on
largely subjective interpretations. A low intensity earthquake, one in which only some people
feel the vibration and there is no significant property damage, is rated as a II. The highest
rating, a XII, is applied only to earthquakes in which structures are destroyed, the ground is
cracked and other natural disasters, such as landslides or tsunamis, are initiated.

Richter Scale ratings are determined soon after an earthquake, once scientists can compare the
data from different seismograph stations. Mercalli ratings, on the other hand, can't be
determined until investigators have had time to talk to many eyewitnesses to find out what
occurred during the earthquake. Once they have a good idea of the range of damage, they use
the Mercalli criteria to decide on an appropriate rating.

Predicting Earthquakes
Photo courtesy USGS
Damage in downtown Anchorage, Alaska, caused by the 1964 Prince William Sound
earthquake.

We understand earthquakes a lot better than we did even 50 years ago, but we still can't do
much about them. They are caused by fundamental, powerful geological processes that are far
beyond our control. These processes are also fairly unpredictable, so it's not possible at this
time to tell people exactly when an earthquake is going to occur. The first detected seismic
waves will tell us that more powerful vibrations are on their way, but this only gives us a few
minutes' warning, at most.

Scientists can say where major earthquakes are likely to occur, based on the movement of the
plates in the earth and the location of fault zones. They can also make general guesses of
when they might occur in a certain area, by looking at the history of earthquakes in the region
and detecting where pressure is building along fault lines. These predictions are extremely
vague, however -- typically on the order of decades. Scientists have had more success
predicting aftershocks, additional quakes following an initial earthquake. These predictions
are based on extensive research of aftershock patterns. Seismologists can make a good guess
of how an earthquake originating along one fault will cause additional earthquakes in
connected faults.

Another area of study is the relationship between magnetic and electrical charges in rock
material and earthquakes. Some scientists have hypothesized that these electromagnetic fields
change in a certain way just before an earthquake. Seismologists are also studying gas
seepage and the tilting of the ground as warning signs of earthquakes. For the most part,
however, they can't reliably predict earthquakes with any precision.

So what can we do about earthquakes? The major advances over the past 50 years have been
in preparedness -- particularly in the field of construction engineering. In 1973, the Uniform
Building Code, an international set of standards for building construction, added
specifications to fortify buildings against the force of seismic waves. This includes
strengthening support material as well as designing buildings so they are flexible enough to
absorb vibrations without falling or deteriorating. It's very important to design structures that
can take this sort of punch, particularly in earthquake-prone areas. See this article on How
Smart Structures Will Work for more on how scientists are creating new ways to protect
buildings from seismic activity.
                                           Photo courtesy USGS
                                  Bridge columns cracked by the
                               Loma Prieta, Calif. earthquake of 1989.



Another component of preparedness is educating the public. The United States Geological
Survey (USGS) and other government agencies have produced several brochures explaining
the processes involved in an earthquake and giving instructions on how to prepare your house
for a possible earthquake, as well as what to do when a quake hits.




                                           Photo courtesy USGS
                   The great San Francisco fire of 1906 was initiated by a powerful
                    earthquake. The earthquake vibrations and catastrophic fire
                                     destroyed most of the city,
                                  leaving 250,000 people homeless.



In the future, improvements in prediction and preparedness should further minimize the loss
of life and property associated with earthquakes. But it will be a long time, if ever, before
we'll be ready for every substantial earthquake that might occur. Just like severe weather and
disease, earthquakes are an unavoidable force generated by the powerful natural processes
that shape our planet. All we can do is increase our understanding of the phenomenon and
develop better ways to deal with it. To learn more about earthquakes, check out the USGS
Web site, or any of the other sites listed in the Links section.

http://science.howstuffworks.com/earthquake.htm/printable
Earthquake History




Moderate-size earthquakes have occurred on the Parkfield section of the San Andreas fault at
fairly regular intervals - in 1857, 1881, 1901, 1922, 1934 and 1966 - and the next significant
earthquake was anticipated to take place within the time frame 1988 to 1993. While little is
known about the first three shocks, available data suggest that all six earthquakes may have
been "characteristic" in the sense that they occurred with some regularity (mean repetition
time of about 22 years) and may have repeatedly ruptured the same area on the fault. (See
Bakun and Lindh, 1985 and Bakun and McEvilly, 1979.)
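
The regularity behind that anticipated window is easy to reproduce; the sketch below simply
averages the intervals between the listed dates (plain arithmetic, not the actual Bakun-Lindh
analysis):

    import numpy as np

    quake_years = np.array([1857, 1881, 1901, 1922, 1934, 1966])
    intervals = np.diff(quake_years)            # 24, 20, 21, 12, 32
    print(intervals.mean())                     # ~21.8-year mean cycle
    print(quake_years[-1] + intervals.mean())   # naive forecast: ~1988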

The similarity of waveforms recorded in the 1922, 1934 and 1966 events, shown below, is
possible only if the ruptured area of the fault is virtually the same for all three events.




Recordings of the east-west component of motion made by Galitzin instruments at DeBilt, the
Netherlands. Recordings from the 1922 earthquake (shown in black) and the 1934 and 1966
events at Parkfield (shown in red) are strikingly similar, suggesting virtually identical
ruptures.

http://earthquake.usgs.gov/research/parkfield/hist.php
Physics of the Zero-Length Spring of Geoscience

Randall D. Peters
Department of Physics, Mercer University, Macon, Georgia 31207

Abstract: The physics behind the LaCoste zero-length spring is described. An elegant device
patented in the 1930s, this spring is still used in the popular LaCoste Romberg gravimeter.


1 Introduction
The sensitivity of a simple harmonic oscillator (SHO) to external acceleration (forcing) is
proportional to the square of the natural period of the oscillator. High sensitivity from a
vertically oriented, simple spring/mass system would require an inordinately long (weak)
spring. Not only would such a seismometer be unwieldy, it would also be afflicted with
unnecessary noise. A common solution to this problem is to incline the spring away from
vertical, as in the Press-Ewing vertical seismometer, illustrated in Fig. 1.
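
That proportionality follows from the standard SHO response to a steady ground acceleration
$a$ (a textbook step, not taken from the original paper): with mass $m$, stiffness $k$, and
natural frequency $\omega_0 = \sqrt{k/m} = 2\pi/T$,

$$ x_{\mathrm{static}} = \frac{ma}{k} = \frac{a}{\omega_0^2} = \frac{T^2}{4\pi^2}\,a \, , $$

so doubling the natural period quadruples the displacement produced by a given acceleration.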




Assuming no damping, no drive, and a weightless (rigid) boom and spring, the equation of
motion is given by

$$ m g b \cos\theta - b k (L - L_0)\sin(\theta - \alpha) = m b^2 \ddot{\theta} \qquad (1) $$

where $g$ is the magnitude of the gravitational field, $k$ is the spring constant, $L_0$ is the
unstretched length of the spring, and the angles $\theta$ and $\alpha$ are as indicated in Fig. 1.

In analyzing this system, it is convenient to employ the law of sines:

$$ \frac{D}{\sin(\theta - \alpha)} = \frac{L}{\sin(\pi/2 + \theta)} = \frac{L}{\cos\theta} \qquad (2) $$

An equilibrium relation is found by setting $\ddot{\theta} = 0$ in Eq. (1), from which

$$ m g = k D \left( 1 - \frac{L_0}{L_e} \right) \qquad (3) $$

where $L_e$ is the length of the spring at equilibrium (when $y = 0$).

For small motions, $1/L \approx (1/L_e)(1 - \Delta L/L_e)$ with $\Delta L \approx y \sin\theta_e$. Thus the equation
of motion for the system becomes

$$ m \ddot{y} + \left( k \frac{L_0}{L_e} \sin^2\theta_e \right) y = 0 \qquad (4) $$

Since the parenthetic expression multiplying $y$ is the square of the angular frequency of the
simple harmonic motion, $\omega_0^2$, the period of the motion is inversely proportional to
$\sin\theta_e$. In principle, one could therefore extend the period, $T$, without limit as
$D \rightarrow 0$ (since $\sin\theta_e = D/L_e$). It is clear from the figure, however, that this
tends toward an equilibrium which at best is approximately neutral, and at worst is unstable.
In practice, it is very difficult to work with such a system having a period $T > 30$ s because
of material limitations associated with anelasticity.

An ordinary spring does not perform well in the Fig. 1 configuration. Many conventional
seismometers have thus utilized the so-called ``zero-length'' spring, invented by LaCoste and
patented in the 1930s. Although designed originally for use in gravimeters (LaCoste
Romberg types being very popular), the spring has found widespread use in vertical
seismometers of the conventional (non-force-balance) type.

From Eq. (4), it is seen that $T$ can be lengthened by letting $L_0 \rightarrow 0$, even though
$\theta_e$ remains constant. Working with $L_0$ near zero and $\theta_e \approx \pi/4$ has been a popular
combination, widely used in the World-Wide Standardized Seismograph Network.

What is meant by the term ``zero-length''? If there were no constraint due to the finite wire
diameter of such a helical spring, it would have zero length in the unstretched condition. To
achieve this, a twist is introduced into the wire as it is coiled during the manufacturing
process. This works because a coiled spring ``unwinds'' as it is stretched. An appreciation for
this phenomenon can be gained by noting that the dominant modulus, insofar as the spring
constant is concerned, is the shear modulus.

To appreciate the value of the zero-length spring from a stability standpoint, consider the
equation of motion, Eq. (4). Although this equation implies infinite period for $L_0 = 0$, it was
derived under the assumption of very small motion. A treatment of the problem allowing
larger vertical motion of $m$ would show that it is impossible for the effective spring
constant $k_{\mathrm{eff}} = k L_0 \sin^2\theta_e / L_e$ to be identically zero. Nevertheless, $k_{\mathrm{eff}}$ is small
because $L_0$ is small, even though $\sin\theta_e$ is large ($\theta_e$ near $\pi/4$). For a spring of
non-zero length, the only way $k_{\mathrm{eff}}$ can be made small is by forcing $\sin\theta_e$ to a small
value. In so doing, one moves in the direction of instability, as is obvious from Fig. 1. Small
changes in the spring characteristic, due to thermal or anelastic change, give rise to large
changes in $y$.

2 Modern Seismometers
The modern force-balance seismometer does not utilize a zero-length spring. It relies on
electronic feedback to ``soften'' a strong spring. Thus $k_{\mathrm{eff}}$ is made small by feedback of
proper phase and amplitude to a transducer whose adjustable upward force augments what is
typically a ``hard'' leaf spring. It is tempting to believe that the force-balance technique has
somehow eliminated the problems of anelasticity that plague the non-zero-length spring of a
conventional type. First, let us note that it is not possible to correct a poor mechanical design
by electronic means. In other words, problems of anelasticity (small-amplitude, long-period
nonlinearity) must still be present in such an instrument; they have just been camouflaged.

It is the author's belief that the use of a zero-length spring as the basis for a force-balance
instrument would be superior to the systems presently in use; i.e., build a vertical seismometer
(with a large mass) employing the same principle as the modern LaCoste Romberg
gravimeter (which the author understands to use a small zero-length quartz spring and force-
balancing by means of a capacitive transducer). Of course, for the large mass, the large
feedback force must be supplied magnetically. It should be noted that the magnetic force is
not devoid of anelastic (nonlinear) problems, through the Barkhausen effect.

				