Understanding Our Planet Through Chemistry


This U.S. Geological Survey site shows how chemists and geologists use analytical chemistry to:
determine the age of the Earth; show that an extraterrestrial body collided with the Earth; predict volcanic
eruptions; observe atmospheric change over millions of years; and document damage caused by acid rain and
pollution of the Earth's surface.


INDEX

Foreword

I. Introduction

II. Understanding the Earth

IIa. History recorded in chemistry. How old is the Earth?

IIb. Geologic processes

IIc. Environment

IId. Pollution

IIe. Pollution Prevention

III. Mapping the chemistry of the Earth's surface

IIIa. Assessment of public lands

IV. Can we depend on chemical analyses?

IVa. Measuring quality



FOREWORD

This document describes the role of chemistry in issues vital to our economy, health, and well-being.
When we analyze a sample of the Earth, we never ask whether a specific element is present. Virtually
every sample of the Earth contains every natural element in some amount. The more appropriate
questions are: How much of it is present? Is there enough to be mined profitably? In the environment, is it
dangerous at this level or in this form? And after we've identified the questions we need to answer about
our planet, we then need to ask: What clues can we find that will give us the answer?

We will show you how many geologic problems are solved using routine analyses of the major
components of rocks. We will also show you the complexity of analyzing trace amounts of common
components in extremely small samples, such as rare samples of air from more than 100 million years
ago, tiny samples of ore-forming fluids that were entombed in minerals 300 million years ago, or small
amounts of naturally-occurring radioactive isotopes that are as old as the Earth. Because some elements
in our environment are hazardous at trace levels, they must be analyzed down to those low levels. The
impact of quality control on analyses will also be discussed, as well as the production of standard
reference materials that are distributed internationally to Federal and private laboratories.

As the primary Federal Earth-Science Agency, the USGS studies and provides solutions to questions
concerning our planet, assesses the mineral resources of Federal lands, and serves as a repository for
geochemical data generated by numerous Federal programs. These data are being applied to new
economic and environmental concerns and provide a cost-effective method to solve geochemical
problems, often with no impact on wilderness or fragile refuges.

---------------------------------------------------------

I. Introduction

Questions about geology, the science of the Earth, can be difficult to answer because many times we
can't safely get close enough to the event. Even if we can, our senses are not sharp enough to detect
everything that is happening. The Earth is relentless in its course of change, but the transformation
occurs over a vast amount of time. Some geologic processes can take a million years or more to
complete. We know that today's events have also occurred repeatedly throughout geologic time. To
understand our planet Earth, we need to read and interpret the permanent records in the Earth's crust
and interior. These records are the key to the future, and many of these clues are preserved in the
chemistry of geologic samples.

Everything we touch in our daily lives is made up of elements. There are 92 elements that occur naturally,
and in most cases, the human senses cannot recognize these elements when they are present in a
compound. If, for example, we could always recognize what something is made of, there would be no
such thing as "fool's gold" (a natural combination or iron and sulfur called pyrite). Because we have
difficulty identifying these relatively pure compounds, it's not surprising that when rock or soil contains
only a very small amount of an element we are incapable of recognizing the element's presence.

Using only our vision, pyrite is easily confused with gold, so much so that the common name for pyrite is
"fool's gold."

Using analytical chemistry, we can even determine trace elements (elements present at very low levels)
at the parts per million (ppm) or parts per billion (ppb) level. It's difficult to comprehend the concentration
of a substance at this low a level. To get a mental picture, imagine an average 3-bedroom home. It would
take about 1 million marbles to cover the floors of the home. One part per million would be represented by
just one marble among all the other marbles. For that same marble to represent one part per billion,
however, it would take 20 football fields covered with marbles.
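
As a rough numerical check on that analogy (a sketch only; the marble footprint and floor area below are assumed for illustration, not taken from the text):

# Rough check of the marble analogy. Assumed sizes: a small marble covering
# about 1.1 square centimeters of floor, a home with about 110 square meters
# (roughly 1,200 square feet) of floor, and a football field of about
# 5,400 square meters.
MARBLE_FOOTPRINT_CM2 = 1.1
HOME_FLOOR_M2 = 110.0
FOOTBALL_FIELD_M2 = 5400.0

marbles_per_home = HOME_FLOOR_M2 * 10_000 / MARBLE_FOOTPRINT_CM2
print(round(marbles_per_home / 1e6, 1), "million marbles cover the home's floors")

# One part per billion needs 1,000 times as many marbles as one part per million,
# so 1,000 homes' worth of floor area:
fields = 1_000 * HOME_FLOOR_M2 / FOOTBALL_FIELD_M2
print(round(fields), "football fields for one marble in a billion")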

Different elements have different physical properties. These properties determine what methods can be
used to analyze each element (or group of elements). The methods described in this WWW document
can be applied to many different geological problems, but no one method can solve every problem. The
analytical methods described here are only a few that were selected to show the role of chemistry in
geology.

---------------------------------------------------------

II. Understanding the Earth

IIa. History recorded in chemistry

How old is the Earth?

The question of when the Earth was formed and when various events on it occurred has long fascinated
humanity. In the past, various estimates of the age of the Earth have been made using the available
technology. All estimates of this type changed drastically with the modern application of
radioactivity.

Elements, isotopes, and radioactivity

Matter is made up of atoms, and atoms are made up of a complex array of subatomic particles. Let's
consider only three of these particles: protons (positively charged), neutrons (no charge), and electrons
(negatively charged). Every element has a fixed number of protons that cannot be changed without
creating a different element. If, for example, we add a proton to an atom of sulfur, it becomes heavier and
is now an atom of chlorine. If we change the number of neutrons in an atom, however, it has almost no
effect on the chemical properties and outward appearance but does have an effect on the atomic mass. It
can also have an extreme effect on the stability of the atom. If we take an atom of potassium-39, which
has 20 neutrons, and add one more neutron, the atom becomes potassium-40, which is unstable and can
radioactively decay.

Each combination of an element with a different number of neutrons is called an isotope. Isotopes that
are radioactive disintegrate or decay in a predictable way and at a specific rate to make other isotopes.
The radioactive isotope is called the parent, and the isotope formed by the decay is called the daughter. A
radioactive isotope decays at a constant rate proportional to the number of radioactive atoms remaining.
A simple way of describing the speed of decay is the time it takes for half of the atoms of a
radioactive parent to decay and form the daughter element(s). This is called the half-life. Various events
(especially melting of the rock) will cause the isotopes in a rock to redistribute. When the rock solidifies, it
can be thought of as starting a stopwatch. By determining the amounts of the parent and daughter
isotopes present, scientists can determine when the stopwatch started.

Naturally occurring radioactive isotopes (called parent isotopes) disintegrate at specific rates to make
other isotopes (called daughter isotopes). The amount of time it takes for half the quantity of the original
isotope to decay is constant, no matter how much was present at the beginning. Based on this principle,
the age of geologic events can be measured.

As an example, the parent-daughter system used to determine the age of the Earth is the uranium-lead
system. The decay of the parent uranium isotopes to daughter lead isotopes in samples of the Earth,
Moon, and meteorites indicates that all the planets in our solar system formed 4.5 billion years ago.
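
As a rough illustration of the stopwatch idea (a sketch of the standard decay arithmetic, not a description of any particular USGS procedure): after a time t, the fraction of parent atoms remaining is (1/2)^(t/T), where T is the half-life, so measuring the daughter-to-parent ratio D/P gives t = T x log2(1 + D/P) when no daughter was present at the start.

import math

def age_from_isotopes(parent_atoms, daughter_atoms, half_life_years):
    # Assumes a closed system with no daughter isotope present when the
    # "stopwatch" started; real age determinations correct for any initial daughter.
    return half_life_years * math.log2(1 + daughter_atoms / parent_atoms)

# Hypothetical numbers: equal parent and daughter atoms measured in a mineral
# dated with the uranium-238 to lead-206 system (half-life about 4.47 billion years).
print(age_from_isotopes(1.0, 1.0, 4.47e9))  # one half-life, about 4.47e9 years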

While determining the age of the Earth is intriguing, radiometric dating has recently been useful in more
practical issues like the following: With what age of granite formation are ore deposits in a particular
region associated? How recently has a fault been active, and is it likely to be safe to build near it now?
How often does a volcano erupt and how often do landslides recur?

On May 18, 1980, the Cascade volcano Mount St. Helens erupted explosively, causing a great deal of
destruction and a number of deaths.

In addition to telling us the Earth is 4.5 billion years old, geologic dating can answer important questions
such as: When was the last time a fault moved? Where is it safe to construct a nuclear
reactor? How frequently does a particular volcano erupt? Is a volcano nearing an eruptive part of its
cycle?

Because different isotopes of an element have different masses, they can be viewed as an arrangement
of masses in a spectrum. An instrument that separates and electronically measures a spectrum of atomic
masses is called a mass spectrometer. There are many types of mass spectrometers, but the most
frequently used in earth-science age determinations are magnetic sector mass spectrometers. These
magnetic spectrometers operate on the principle that if you put an electric charge on an object and throw
it into a magnetic field, the object's path will form a circle. The radius of the circle will depend on the
strength of the magnetic field and the mass of the charged atom divided by its electric charge. Thus, if
you have a purified portion of an element from a sample with several isotopes, each can be made, in
sequence, to travel the same circular path to the detector by varying the strength of the magnetic field.
Magnetic sector mass spectrometers consist of at least three components, as illustrated in this figure: (1)
a source of sample ions, (2) a magnetic field, and (3) a detector.

The atoms on the filament are ionized and accelerated at a specific velocity through a magnetic field,
causing them to take a specific curved path depending on the ion's mass. This type of mass spectrometer
scheme most commonly used in geologic dating shows how ions with a specific mass are directed into
the collector for counting, while others, like a race car taking the curves at the wrong speed, are lost.
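
A sketch of the physics behind that curved path (the standard relation for a charged particle in a magnetic field, not the specification of any particular instrument): an ion of mass m, charge q, and speed v in a field B travels a circle of radius r = m x v / (q x B), so at a fixed field and speed, heavier isotopes sweep wider arcs.

# Illustrative only: orbit radius of singly charged ions at an assumed speed
# and field strength, showing how mass alone changes the path.
AMU = 1.660539e-27        # kilograms per atomic mass unit
E_CHARGE = 1.602177e-19   # coulombs per elementary charge

def orbit_radius_m(mass_amu, speed_m_per_s, field_tesla, charge_units=1):
    return (mass_amu * AMU * speed_m_per_s) / (charge_units * E_CHARGE * field_tesla)

for mass in (206, 208):   # two lead isotopes
    print(mass, round(orbit_radius_m(mass, 2.0e5, 0.5), 4), "m")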

A difficult chemical procedure is used to concentrate the element of interest so that isotopes can be
measured on a mass spectrometer. In many cases the recovered amount is no larger than a spot on the
sample filament and could pass through the eye of a needle.

Digesting rocks

But how do you take a rock and purify a portion of it for mass spectrometry, and how do you analyze a
rock sample on an instrument that only analyzes liquids? In most cases, before a rock's chemical
composition can be determined, it must pass through both a physical and a chemical preparation to free
the element(s) of interest from the rock and present them in a dissolved or liquid form.

Initially, fist-size pieces of rock are broken down to pea-size fragments using a crusher with steel jaws. A
pulverizer grinds this coarse material into a powder as fine as flour.

Next, the powder is further broken down, or decomposed, by using either an acid treatment or fusion.
During this chemical decomposition, the weighed sample of powdered rock releases its elements into
solution.

Because most rocks are composed of a combination of many types of minerals, each having different
chemical and physical properties, digestion is accomplished by using a combination of acids. Most
commonly used is a mixture of hydrofluoric, nitric, hydrochloric, and perchloric acids, which will
decompose all but the most resistant minerals. The acids are heated with the sample powder in Teflon
containers, on a hot plate, or in a specially designed microwave oven.

In the fusion technique, a powdered inorganic reagent (known as a flux) is mixed with the rock powder
and heated above the melting point of the flux; the molten flux then attacks the sample and decomposes
it into a uniform melt. The melt may then be poured into a mold and cooled for methods that require a
uniform solid such as X-ray fluorescence spectrometry (scroll down to picture of arm pouring red hot
samples for a discussion of XRF) or dissolved in a diluted acid to create a liquid solution. The higher
temperatures (500 to 1,200 C) and caustic nature of the molten chemicals used for fusions increase the
efficiency of the decomposition as compared to acid techniques and render most minerals soluble. Each
form of sample decomposition, acid or flux, has its advantages and disadvantages that must be
considered. In addition, the importance of safety and simplicity must not be ignored.

Disaster from space

One of the mysteries of the history of the Earth is the layer of clay that was deposited around the entire
globe 65 million years ago. The layer marks the K-T boundary, the end of the Cretaceous and beginning
of the Tertiary periods. It is best known as the time when not only the dinosaurs but nearly half of all life
forms became extinct.

Chemical evidence in this layer of clay preserved from 65 million years ago in Caravaca, Spain, indicates
an asteroid or comet struck the Earth at up to 170 times the speed of sound, possibly causing a disaster
resulting in the extinction of half of all life forms, including the dinosaurs.

In 1980, Nobel Laureate Luis Alvarez and his team members discovered a 9
ppb abundance of the element iridium while using neutron activation analysis to study 1-cm-thick samples
of the K-T boundary layer. The fact that the high level of iridium coincided exactly with the end-of-
Cretaceous mass extinction led them to propose a theory linking these two observations. They
theorized that an asteroid between 6 and 14 km in diameter struck the Earth, and the impact lofted
enormous amounts of pulverized target material high into the Earth's atmosphere. They speculated that
this dust-size impact ejecta caused an environmental catastrophe.

Under a microscope, these quartz grains show lines that are characteristic of high shock and are found
only with meteorite impacts or atomic explosions. This 1/3 millimeter grain is from the K-T boundary clay
at Teapot Dome, Wyoming.

Additional research by other scientists suggests that if the extraterrestrial object was an asteroid, it most
likely impacted the Earth at a velocity of 50 times the speed of sound and measured 15 km in diameter.
Because asteroids of this size are very few in number in our solar system, the object could also have
been a comet, most likely moving even faster, possibly 170 times the speed of sound but measuring only
10 km in diameter.

To test the impact theories, we have applied a new analytical technique called laser ablation, inductively
coupled plasma, quadrupole mass spectrometry (LA-ICP-QMS). To allow efficient, rapid spatial
sampling, a laser is used. The technique is highly sensitive for almost all elements.

As depicted below, the energy of the laser is focused onto a spot about 80 micrometers in diameter
(slightly more than the diameter of a human hair) to vaporize and sputter material from small zones of the
sample. The power density of the laser ranges from 1 million to 1 trillion watts per square centimeter.
This incredibly high energy density is created when the energy is packed into small bursts of 160
microseconds, which are then focused with a lens onto a very small spot.

A laser ablation, inductively coupled plasma, quadrupole mass spectrometer vaporizes a small spot on the
sample. The vapor is then ionized in the plasma. The four charged rods (the quadrupole) then cause only
the appropriate ions to arrive at the detector for counting; all others are lost. By changing the electric
charge on the rods, different elements can be determined.

The vapor from the sample is then carried by a stream of argon gas into a 7,000 C argon plasma, where
the vapor is ionized. These ions are then drawn into a quadrupole mass spectrometer (QMS). The QMS
consists of two sets of electrically charged, machined rods. A radio-frequency signal is applied to both
sets of rods. Under specific operating conditions, one unique, mass-to-charge ratio of ions will be directed
down the opening between the four rods and exit to the detector. All other ions will be lost.

Based on these 250-micron-wide, black laser trails across the brown layer of clay from the K-T boundary
in Caravaca, Spain, the quadrupole mass spectrometer found abnormally high abundances of platinum-
group elements (up to 1,000 ppb), most likely coming from an extraterrestrial source.

The LA-ICP-QMS is sensitive for all the platinum-group elements (PGEs) that would appear from an
asteroid impact. The laser, which has fine sampling resolution, was used to sample the 1-cm layer
analyzed by Alvarez and coworkers but in bands only 0.25-mm thick. In this way, we were able to sample
just the layer of PGE-enriched material and found the concentration in this zone to be nearly 1 ppm,
about 100 times higher than the level previously reported. This greater concentration of the PGEs gives
additional support to the theory that an extraterrestrial body collided with the Earth 65 million years ago.

IIb. Geologic processes

Volcanoes

Volcanoes erupt when molten rock (magma) deep in the Earth's interior makes its way to the surface. On
average, for every cubic kilometer of magma erupted from a volcano, 3 to 10 cubic kilometers are stored
beneath the surface in shallow reservoirs called magma chambers.

We can see what these magma chambers look like by studying ancient reservoirs that have solidified and
been exposed by erosion. One of these is Half Dome in Yosemite National Park.

Half Dome, in Yosemite Park, is the remains of a magma chamber that cooled slowly and crystallized
beneath the Earth's surface. The solidified magma chamber was then exposed and cut in half by erosion.
Similar, still molten magma chambers are thought to underlie many active volcanoes.

The degree of violence of an eruption depends principally on the chemical composition of the magma. Of
major importance is the interplay between the proportion of silicon dioxide (SiO2, or silica), which controls
the viscosity of the magma, and volatile components, such as water, carbon dioxide, and sulfur dioxide.
Magmas that are poor in silica usually release their gases non-explosively and produce slow-moving lava
flows, like those commonly seen in Hawaii. Although such eruptions can be destructive, humans can
usually avoid the lava flow and are rarely threatened by such volcanic activity.

Low-silica magma, typical of Hawaiian volcanoes, produces lava flows that move slowly and can rarely
overtake a human who wants to escape.

Because buildings and structures cannot easily be moved out of harm's way, even slow-moving lava
flows can cause significant property damage.

Under certain conditions, however, the magma and surrounding rocks are blown apart by the release of
volatiles, resulting in a dangerous explosive eruption, as happened on May 18, 1980, at Mount St. Helens,
near Portland, Oregon. With only about 0.5 cubic km of erupted magma, however, this was by no means
considered a large volcanic eruption. The 1991 eruption of the Pinatubo Volcano, near Manila in the
Philippines, was approximately 14 times larger, involving about 7 cubic km of magma. But even the
Pinatubo eruption is relatively small compared to infrequent giant eruptions of volatile- and silica-rich
magma that have occurred throughout the history of the Earth.

In the early 1900's a chemist could analyze about 200 samples per year for the major rock-forming
elements. Today, using X-ray fluorescence spectrometry, two chemists can perform the same type of
analyses on 7,000 samples per year.

Major-element chemical analysis is a front-line tool in the study of volcanoes and volcanic hazards. The
analysis of a volcanic rock provides a fundamental common ground for comparing the styles and violence
of previous eruptions of similar composition. During the first half of the 20th century, these analyses were
performed exclusively by classical wet chemical analysis, chemically separating each element of interest
from the other elements in the sample. This procedure was extremely laborious. A good analytical
chemist could analyze only a couple of hundred rocks per year for their complete major element
chemistry. U.S. Geological Survey scientists now use technology called X-ray Fluorescence Spectrometry
(XRF) to perform the same type of analyses.

XRF Spectrometry starts at the atomic level. Atoms consist of protons and neutrons in a central nucleus
with electrons in different orbitals around that nucleus. If an electron from an inner orbital is knocked out,
the vacancy created is filled by an electron previously residing in a higher orbit. The excess energy
resulting from this transition is dissipated as an X-ray photon with a characteristic wavelength. In X-ray
fluorescence analyses, the electron vacancies are created by bombarding the sample with a source of X-
rays or gamma rays, most frequently from an X-ray tube or a radioactive isotope. By detecting the
characteristic X-rays that are fluoresced, the element of interest is shown to be present in the sample.
The more abundant the X-rays are, the more of that element is present in the sample.

Bombarding the sample with X-radiation does not require a liquid sample. In fact, because solid samples
are more stable than liquids, virtually all samples presented to X-ray spectrometers are solids.
Furthermore, there is almost no permanent change that takes place in a solid sample analyzed by XRF,
allowing it to be saved and reanalyzed. This is especially important for the repeated analysis of the same
calibration standards over periods of years, permitting the use of the same analysis protocol.
Homogeneity requirements are frequently solved by dissolving a portion of the pulverized sample in
molten flux that is then poured into a mold and cooled to form a solid glass disc with a precise, flat,
analytical surface.
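
The statement that more X-rays means more of the element is what makes the repeated calibration standards useful. A minimal sketch of that idea, assuming a simple linear response and hypothetical count rates (a real XRF calibration also corrects for inter-element, or matrix, effects):

# Hypothetical calibration: known SiO2 concentrations (weight percent) in
# standard glass discs versus measured X-ray count rates (counts per second).
standards = [(10.0, 2100.0), (30.0, 6150.0), (50.0, 10200.0)]

# Least-squares line: counts = slope * concentration + intercept.
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(r for _, r in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * r for c, r in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Invert the calibration for an unknown disc's count rate.
unknown_counts = 7800.0
print(round((unknown_counts - intercept) / slope, 1), "weight percent SiO2 (estimated)")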

To analyze samples by X-ray fluorescence spectrometry, samples are fused at 1,120 C with a flux; the
chemist then pours the molten mixture into special molds to produce solid glass discs with a precise
analytical surface.

A team of two analysts, using this method, can analyze over 7,000 samples a year. Because so many
more analyses are now available, geologists can answer more difficult types of questions such as what
changes are happening in the magma chamber during an eruptive cycle.

At a number of frequently active volcanoes, such as Mount St. Helens (which has erupted about every
100 years), a thick and complex sequence of volcanic rocks has been deposited. Geochemists and
geologists can reconstruct the eruptive history of the volcano through field studies and analyses of these
rocks. They conclude that eruptive periods at Mount St. Helens are separated by longer periods of
repose. As at many other volcanoes, there are systematic changes in major- and trace-element
composition through time. The 1980 eruption appears to be at the end of a chemical cycle that began
about 500 years ago.

With this information we can predict the style, frequency, and warning signs of future eruptions. Newly
erupted lava, pumice, or ash may then be evaluated in a historical context. In some instances, XRF
analyses can be rapidly completed in less than 24 hours by express delivery of the samples to the lab
and electronic transmission of data back to the volcano being examined. This is something that would
have been impossible for the classic chemist.

While systematic changes in overall chemistry contribute a great deal of information about a volcano,
there is still a desire to understand more about what happens deep within the Earth's crust: how the
magma forms and what triggers the volcano into eruption.

Application of instrumental neutron activation analysis

Some of our understanding of the source of molten magma has been obtained by analyzing rocks for a
group of 15 elements called the rare-earth elements (REE). In a type of rock called basalt, the total
amount of all the REEs is often less than 100 parts per million (ppm).

One well proven analytical technique used to determine the concentrations of REE in rocks and minerals
is instrumental neutron activation analysis (INAA). In this technique, a rock or a single mineral that the
rock contains is irradiated using a nuclear reactor. This causes the elements to become radioactive and
to emit gamma rays with distinct energies. The sample is then placed on a detector that measures how
many gamma rays of these energies are emitted. The number of gamma rays emitted at each energy is
proportional to the abundance of the corresponding element.

To get the sensitivity necessary to measure rare-earth elements in specific rocks, samples can be
irradiated in a low-power reactor. Irradiation turns some of each element into an unstable isotope whose
decay can then be detected and counted to determine the quantity of the element in the sample.
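
One common way to turn those gamma-ray counts into a concentration is to irradiate and count a standard of known composition alongside the unknown and compare count rates at the same characteristic energy. The sketch below shows that comparator idea with hypothetical numbers; it ignores the decay and geometry corrections a real INAA analysis would apply.

def inaa_concentration_ppm(counts_unknown, counts_standard,
                           conc_standard_ppm, mass_unknown_g, mass_standard_g):
    # Simplified comparator calculation: concentration scales with the
    # mass-normalized ratio of gamma-ray counts (no decay correction).
    ratio = (counts_unknown / mass_unknown_g) / (counts_standard / mass_standard_g)
    return ratio * conc_standard_ppm

# Hypothetical example: a basalt chip gives 15,400 counts of a lanthanum gamma
# line, while a 12-ppm lanthanum reference of similar size gives 22,000 counts.
print(inaa_concentration_ppm(15400, 22000, 12.0, 0.105, 0.100), "ppm La")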

To understand what the REE can tell us about how magmas are formed, scientists have developed
mathematical formulas. These formulas suggest that when certain minerals interact with molten rock,
there can be appreciable effects on the rock's REE contents. In a process called partial melting, for
example, if a source rock contains minerals (such as garnet) that can hold high concentrations of certain
REE, then these elements tend to be prevented from entering the molten rock. Because Hawaiian basalts
have low concentrations of the heavier REE, and garnet has high concentrations of heavy REE, some
Earth scientists conclude that the magmas have formed by partial melting of a source rock that contains
garnet, and the garnet held back the heavy REE.

The smallest clues

To understand more about the causes of eruptions, geologists have to look more closely into the fine
details of the solidified magma samples to find a record of the conditions before and during eruption.
Mineral crystals within magmas vary in composition depending on the surrounding magma and the
temperature at which they are formed.

Why do some volcanoes explode catastrophically with rapid, life-threatening devastation? Recent research
indicates that magma does not necessarily move directly from its source to an eruption. A magma
chamber may contain stable reservoirs or layers of one composition at a lower temperature.
Subsequent influx and mixing of a second, higher temperature magma overheats the mixture, triggering an
explosion. The 1991 Pinatubo eruption appears to have been triggered when a hot, low-silica basalt
magma penetrated a stable reservoir of cooler, high-silica magma, forming an explosive mixture. The
explosion forced the closing of Clark Air Base and interrupted numerous air flights because of
ash clouds that damaged engines.

Mineral compositions from the 1991 eruption of Mt. Pinatubo indicate that low-silica magma at a
temperature of about 1,250 C mixed with high-silica magma (780 C) just before the eruption. Based on
this information, volcanic rocks produced in previous eruptions were analyzed. The results suggest that
the 1991 eruption is the latest in a series of eruptions that were triggered by the mixing of magmas.
Magma mixing has also triggered eruptions at a number of other volcanoes.

Shortly after World War II, physicists in the United States, England, Germany, and Japan began to perfect
a new analytical instrument called the electron microscope. Instead of producing a visually magnified
image, this new instrument accelerated and focused electrons through a column of magnetic lenses onto
a small spot on the sample. The ability to magnify objects is limited by the energy or wavelength of the
radiation that is used to observe the object. Because the accelerated electrons from the column have a
much shorter wavelength than light, it is possible to produce images at much higher magnifications than
can be obtained using an optical microscope. Today, the most powerful electron microscopes can
produce images at magnifications as high as 1 million times.

When electrons are accelerated into an object, they interact with the atoms in that object and produce
three important types of radiation: (1) X-rays (you may scroll back to the picture of the Early 1900's
Laboratory where a description of how X-rays are formed was presented for the related technique of X-
ray Fluorescence), (2) the secondary electrons that are used to see the sample, and (3) back-scattered
electrons, which are bounced back as a function of the mass of the sample.

In the 1950s, the French physicists Castaing and Guinier developed an instrument based on the
characteristic X-rays produced by the electron bombardment of the sample. This instrument can measure
the number of X-rays emitted from the small spot irradiated on the sample. By counting the X-rays
produced, Castaing determined the chemical composition of a portion of a sample no larger than the size
of a human blood cell. This new instrument was called the electron microprobe (EMP).

During the same period of time, another instrument was brought into production: the Scanning Electron
Microscope (SEM). Like the electron microscope, it uses the secondary electrons created from the
sample's surface to record an enlarged image of the object. Its principal advantage is that it deflects the
electron beam and scans it back and forth over the sample surface (called rastering) in a pattern similar
to that in which wallpaper covers a wall.

In order to see objects smaller than what normal light allows, scientists have developed an instrument
that accelerates electrons. The Scanning Electron Microscope uses electromagnetic lenses to focus the
electrons, since glass lenses cannot.

The secondary electrons are continuously detected, and the signal is directed to a television monitor
where the image is displayed. Zooming in or backing out by changing the size of the raster area (hence
changing the magnification), the scientist can use the enlarged image to aim the scanning electron
microscope. At the same time, X-rays characteristic of the composition are generated. These X-rays can
be detected by an X-ray analyzer and used to create a map of the element's abundance.

In this example, calcium X-rays produced from a pinhead-size sample from the 1991 eruption of Mt.
Pinatubo are mapped and color-coded by a scanning electron microscope to show the range of calcium
content from high (white) to low (green).

Analyzing a single particle of smoke

Because of their similarities, EMPs and SEMs overlap in their capabilities. The modern EMP has become
a true hybrid that combines the viewing capability of the SEM with the analytical power of the electron
microprobe. Both EMPs and SEMs are capable of obtaining images at magnifications over 100,000 times.
These instruments can see and then analyze something that wouldn't show up with a light microscope,
such as the single particle of volcanic smoke shown in this picture.

After seeing the invisible, the next questions are "What is that made of?" and "Is it bad for our health?"
Small samples like this particle of volcanic smoke, the size of a single human red blood cell, can be
analyzed by a scanning electron microscope in 4 minutes with errors of less than 1 percent.

Analytical chemistry in the search for ore deposits

Analytical chemistry plays a key role in our continuing quest to understand how ore deposits form and in
the practical exploration for ore deposits. If you pick up an ordinary rock that builds the crust of the Earth
and determine its chemical composition, for every billion atoms, 1 to 10,000 atoms will be metallic
elements such as gold, silver, platinum, mercury, copper, cobalt, nickel, chromium, lead, zinc,
molybdenum, tin, and tungsten. Natural processes in the Earth's crust have the remarkable ability to
concentrate and purify certain rare metallic elements to form unusual deposits of minerals that contain
1,000 to 10,000 times the amounts found in ordinary rocks.

With today's modern mining and extraction technology, it has become possible to mine very low-grade
deposits. For example, gold can be economically recovered from rocks that contain less than one tenth of
an ounce of gold per ton of rock. But gold continues to be expensive because of the cost of locating the
deposit, mining the rock, and extracting the small amount of gold in each ton of rock. All of the inorganic
raw materials used to manufacture the products of today's technological society have to be either mined
or recycled.
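
To connect that ore grade with the concentration units used earlier, the sketch below converts one tenth of an ounce of gold per ton of rock into parts per million, assuming the usual mining convention of troy ounces and short tons:

# Convert an ore grade quoted in troy ounces per short ton into parts per million.
TROY_OUNCE_G = 31.1035       # grams per troy ounce
SHORT_TON_G = 907_184.74     # grams per short ton (2,000 pounds)

def oz_per_ton_to_ppm(oz_per_ton):
    return oz_per_ton * TROY_OUNCE_G / SHORT_TON_G * 1_000_000

print(round(oz_per_ton_to_ppm(0.1), 1), "ppm gold")   # about 3.4 ppm, or 3,400 ppb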

Almost every process that takes place in the Earth's crust, whether from the action of molten rock, heat
and pressure at depth, hot springs or steam, running water, weather, or biological activity can contribute
to the formation of an ore deposit. Geologists use the principles of chemistry to try to understand how
these processes scavenge elements from ordinary rock, transport them, and concentrate them to form an
ore deposit. Geologists have developed models that describe the physical characteristics and chemical
composition of each ore-deposit type and how they relate to the geologic environment in which they form,
similar to the way biologists describe how an organism fits into a particular environmental niche.

In North America and many other parts of the world, almost all of the rich ore deposits exposed at the
surface have already been discovered. Most of the ore yet to be found is not visible to the human eye.
Therefore, geologists have had to improve their understanding and develop more sophisticated ways to
detect where ore deposits can occur.

Two main approaches are used to detect deposits hidden below the surface. One uses the ore-deposit
model, and the other is based on the detection of a dispersion halo that extends for some distance from
the deposit (for more discussion of dispersion halos, scroll down to the section on "Mapping the
Chemistry of the Earth's Surface") .

The following analogy shows how geologists use ore-deposit models. If all but the tip of the tail of an
elephant was buried by a landslide, a biologist could recognize from the skin, hair, and shape of the
appendage that the tail belonged to a mammal. With advanced testing of tissue samples, a biologist could
prove that the tail belongs to an elephant and could easily predict that the body should be buried about 1
meter below the tip of the tail.

Most ore-deposit models are not as advanced as biologists' models for elephants, but a few are nearly so.
Several copper and molybdenum porphyry deposits, located as deep as 2,000 to 4,000 feet below the
surface, have been discovered based on small surface exposures measuring several feet across. These
exposures were of breccia pipes (vertical pipe-shaped bodies of pulverized rock), which are known to
extend thousands of feet above the main body of porphyry deposits. Because not all porphyries contain
deposits of economic metals, geologists can collect and analyze field samples to determine what metals
the porphyry will contain and whether it is worth drilling.

Schematic cross section of a copper-molybdenum porphyry model. Explosive release of steam and gases
during the cooling of the intrusion results in the formation of pipes filled with broken rock fragments that
extend for thousands of feet towards the surface and often contain fragments of the ore body present at
depth.

Analysis of fossil fluids and gases from tiny time capsules

A great many ore-deposit models are tied to the conditions under which the deposit formed. Questions about
these environmental conditions concern the temperature, pressure, source of the
metals, and composition of any fluids and gases that transported and formed the ore or associated
minerals.

Many crystals in the Earth's crust have formed in some kind of fluid. Small quantities of the fluid that
surrounded the crystals during growth are commonly trapped as tiny fluid inclusions within these crystals.
In many cases, these fluid inclusions are less than 0.1 mm across but record important information about the
conditions when the ore was being formed.

Trapped in a time capsule the same size as the diameter of a human hair, the ore-forming liquid in this
inclusion was so hot and contained so many dissolved solids that when it cooled, crystals of halite,
sylvite, gypsum, and hematite formed. As the sample cooled, the fluid shrank more than the surrounding
mineral and created a vapor bubble. Heating the inclusion to the temperature at which the bubble is
reabsorbed and daughter crystals dissolve gives an estimate of the minimum temperature at the moment
of ore formation.

Current understanding of movements within continents reveals that throughout the Earth's history, periods
of large-scale fluid movement have occurred in the Earth's crust. Some of these fluid migrations resulted in
the deposition of metallic ore deposits and accumulations of oil and gas.

Characteristics of fluid inclusions are extremely variable. In the simplest case, when fluid inclusions cool
from the elevated temperature at which they formed, the liquid shrinks and separates into a liquid and a
vapor bubble. Detailed microthermometric studies give a reasonable estimate of the temperature at which
the mineral was formed. Studies of this type reveal that the inclusions were trapped at temperatures from
less than 50 C to over 600 C and at pressures ranging from what is experienced at the Earth's surface
to what would be found several kilometers deep.

Because of the extremely small size of so many fluid inclusions, determining the composition of the
trapped fluids is difficult. First, the total amount of dissolved solids is determined by observing with a
microscope the freezing/melting points of the inclusions. The sample is then crushed and rinsed with
water. This water is recovered and analyzed by using a sensitive analytical technique to determine the
ratios of the elements contributed by the trapped fluid. These ratios are used to calculate the composition
of the fluid. The compositions range from aqueous solutions with salt content similar to rainwater to fluids
with dissolved-solid concentrations of over 60 percent, nearly 20 times the amount found in seawater.

Analytical data on fluid inclusions are needed to understand the chemical and physical processes
involved in the formation of economic mineral deposits. These data are also critical in understanding
modern mineral-deposit models, which promote cost-effective mineral exploration vital to our healthy
industrial economy.

Most fluid inclusions contain dissolved gases, and in some environments the inclusions consist entirely of
gases. Recently, the USGS has designed a gas quadrupole mass spectrometer (QMS) that will analyze
the amounts and chemical identity of gas ions in small gas samples (for more details on QMS
instruments, scroll back to the QMS illustration in the "Disaster from Space" section). This instrument is
extremely sensitive (8 parts per billion detection) and capable of millisecond speeds of analysis, important
for gas bubbles as small as 1/100 of a millimeter in diameter.

The QMS is used extensively to study ore-deposit models as well as environmental and geologic
hazards. Examples include: identifying carbon dioxide as the responsible gas in the Lake Nyos,
Cameroon, disaster, where 2,000 people suffocated in 1986; tracking atmospheric gases from bubbles in
climate-study ice cores of Greenland and Antarctica; and tracing dispersal of smokestack emissions and
gases from geothermal energy wells and springs.

Scientists sample air trapped in the snowpack at the Greenland Ice Sheet Project 2 site in Central
Greenland. These samples will be analyzed by mass spectrometry to determine the composition of
ancient air. These studies help us to predict climate changes.

IIc. Environment

Global change in the geologic past

An exciting new application of the QMS instrument uses a high-energy laser fired through a modified
microscope to open individual gas inclusions in ice. Ice from Greenland and Antarctica contains
atmospheric gases that were captured in snow as it formed. The gases were retained as the snow turned
into ice and formed bubbles. Analysis of these bubbles provides detailed information on the past
composition of the atmosphere.

Sea-level changes, changes in solar activity, and, according to some astrophysicists, even the signals
from distant supernovas, are also recorded in the ice. Compiling and studying this record helps us to
evaluate current changes in the atmosphere and to predict future trends. Ice-core studies provide
valuable information about the levels of human pollution, past climate patterns, sources of moisture, the
altitude of the ice when it formed, frequency and magnitude of natural events, and biological activity at the
ocean surface.

Air bubbles, amber, and dinosaurs

Ages of ice samples found on the Earth cover a span approaching 200,000 years. But how can we tell
what the Earth's atmosphere was like before that? Recently, USGS scientists have used a gas QMS to
determine the oxygen level of ancient samples of Earth's atmosphere from a most unlikely place: amber.
The fossilized resin of conifer trees, amber is interesting to scientists as a medium that traps insects,
small animals, and plants, preserving them through geologic time for future study.

Amber --the fossilized resin of conifer trees--provides a unique means of protecting intricate samples of
the past. This mosquito, lying trapped for 45 million years in a piece of amber, is almost perfectly
preserved.

The recent extraction by scientists of ancient DNA from organisms entombed in amber, much like in the
science-fiction novel and movie Jurassic Park, is an example of why scientists are intensely interested in
amber. Minute bubbles of ancient air trapped by successive flows of tree resin during the life of the tree
are preserved in the amber. Analyses of the gases in these bubbles show that the Earth's atmosphere, 67
million years ago, contained nearly 35 percent oxygen compared to present levels of 21 percent. Results
are based upon more than 300 analyses by USGS scientists of Cretaceous, Tertiary, and recent-age
amber from 16 world sites. The oldest amber in this study is about 130 million years old.

This 84-million-year-old air bubble lies trapped in amber (fossilized tree sap). Using a quadrupole mass
spectrometer, scientists can learn what the atmosphere was like when the dinosaurs roamed the earth.

The consequences of an elevated oxygen level during Cretaceous time are speculative. Did the higher
oxygen support the now extinct dinosaurs? Their demise was gradual in the transition from late
Cretaceous to early Tertiary times, as was the decrease in oxygen content of the atmosphere.

This chart shows a major decrease in oxygen content in the atmosphere from 35 percent to the present
day level of 21 percent. This decrease occurred about the same time that the dinosaurs disappeared--65
million years ago.

Recent methane emissions from Gulf Coast marshes

The Earth's atmosphere is still changing. Natural environmental processes (geological, biological, and
geochemical) produce carbon dioxide (CO2) and methane (CH4). These gases, along with water vapor,
are responsible for trapping heat at the Earth's surface.

Because biological processes are responsible for the production of methane in environments where
organic matter ferments, wetlands (swamps, bogs, etc.) were previously the principal source of methane.
Now, however, the combination of rice cultivation and cattle raising has taken over as the principal
contributor. Studies of methane sources help us to understand their relative contributions and the factors
that control the methane production and release to the atmosphere.

The studies show that when coastal wetlands are flooded by sea-level rise, salt marshes are inundated,
up-slope brackish marshes become saltier, and some fresh marshes near the coast become brackish.
Consequently, total methane emissions decrease because salt marshes do not produce as much
methane as fresh marshes.

Fifteen miles inland from the Gulf of Mexico in a brackish marsh in Terrebonne Parish, Louisiana,
methane emissions are collected in inverted buckets and measured with a portable gas analyzer. Using
these measurements, scientists can determine one effect of global sea-level rise.

USGS studies of methane in Gulf Coast Louisiana indicate that brackish marshes emit between one-
fourth and one-half the methane of the fresh marshes they replace during sea-level rise. The results of
these local measurements in Louisiana can be used to project the world-wide effects of sea-level rise on
methane emissions. By the year 2050, projected world-wide sea-level rise will replace 50 percent of
coastal fresh-water marshes with brackish-water marshes. This will reduce the world's methane
emissions by 2 percent.
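
As a back-of-envelope check on how those figures fit together (a sketch, not part of the USGS analysis): if half of the coastal fresh-water marshes become brackish and brackish marshes emit a fraction r of the fresh-marsh methane, then a 2 percent global reduction implies that coastal fresh-water marshes supply only a few percent of the world's methane.

# Consistency check using only the figures quoted above. "implied_share" is the
# fraction of global methane emissions that would have to come from coastal
# fresh-water marshes for the 2 percent projection to hold; it is solved for here,
# not a measured value.
replaced_fraction = 0.50               # fresh marshes converted to brackish by 2050

for brackish_ratio in (0.25, 0.50):    # brackish emissions as a fraction of fresh
    reduction_per_unit_share = replaced_fraction * (1 - brackish_ratio)
    implied_share = 0.02 / reduction_per_unit_share
    print(f"brackish/fresh = {brackish_ratio}: implied share about {implied_share:.0%}")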

IId. Pollution

Acid rain steals our heritage

In addition to affecting people, plants, and wildlife, air pollution also affects rocks and soils. One of the
problems it causes is the degradation of buildings and monuments, especially those built out of limestone
or marble. These rock types, both almost pure calcite (calcium carbonate), are commonly used
throughout the world as a building stone.

These balusters, on the Pan American Union Building, Washington, D.C., were made from Georgia
marble, and were installed in 1910. They demonstrate the effects of dry deposition of sulfur dioxide, which
causes the formation of gypsum. Gypsum traps particulate matter to form a heavy, black incrustation. In
some areas, the gypsum crust has flaked off the balusters, exposing a fresh but very rough surface.

Studies to determine damage caused by air pollution have pointed to changes in the acidity of the air and
rain. In fact, the term acid rain is now commonly used in the media as well as scientific studies. Acid rain
affects carbonate stone buildings and monuments in two ways. The first is by dry deposition of sulfur
dioxide gas, increasingly contributed to the atmosphere by the combustion of fossil fuels. The gas reacts
with calcium-carbonate building stone to form calcium sulfate (gypsum). As gypsum forms on the surfaces
of the stone, it traps particulate matter, forming a blackened crust.

The second effect of acid rain is wet deposition. Natural rain water is a weak carbonic acid solution, and
all carbonate-stone surfaces that are washed by rainwater are subject to gradual erosion. This erosion is
accelerated, however, by the increased acidity of rain in the eastern United States, which is often 10
times greater than in areas where acidic pollutants are absent.

Current research on acid rain is directed at defining the degree of stone damage due to both dry and wet
deposition. Scientists are measuring the effects of acid rain on historic stone buildings and monuments
across the country. They are exposing samples of marble and limestone to weathering at specific field
sites and simulating depositional processes under highly controlled laboratory conditions.

The effects of both dry and wet deposition are evaluated by the chemical analyses of the stone surfaces
before and after exposure and of rain run-off solutions collected from test slabs.

Recent research by the USGS and other agencies conducted under the National Acid Precipitation
Assessment Program has shown that test samples of marble erode 15 to 30 micrometers per year, while
limestone (which is less compact than marble) erodes from 25 to 45 micrometers per year. (These
measurements are slightly less than the diameter of a human hair.) Approximately 20 percent of
this erosion is caused by acid rain. The remaining 80 percent is the result of the natural solubility of the
stone in rain water. Because the effects of acid rain develop only over an extended period of time, high-
precision analytical chemistry plays a central role in measuring these effects.
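
A quick calculation puts those rates in perspective and shows why high-precision measurements are needed (a sketch using only the figures quoted above):

# Stone recession over a century at the measured rates, and the roughly
# 20 percent share attributed to acid rain.
rates_um_per_year = {"marble": (15, 30), "limestone": (25, 45)}
years = 100

for stone, (low, high) in rates_um_per_year.items():
    total_mm = (low * years / 1000, high * years / 1000)
    print(f"{stone}: {total_mm[0]}-{total_mm[1]} mm per century, "
          f"about {total_mm[0] * 0.2:.1f}-{total_mm[1] * 0.2:.1f} mm of that from acid rain")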

The chemistry of mine drainage

Mine drainage is water that drains from mines. The water can be of the same quality as drinking water, or
it can be very acidic and laden with high concentrations of toxic, heavy metals. In general, the more acidic
the water is, the poorer the water quality.

Because the chemistry of water samples can rapidly change if they are removed from the natural site,
many measurements are made in the field. One of the first of these field measurements is for acidity,
which is read by a meter and reported as the pH of the sample. Water with a pH of 2 has a high
concentration of hydrogen ions and is acidic, whereas water with a pH of 7 is neutral. A study of mine
drainage in Colorado, for example, shows that the pH of mine waters ranges from a low of 1.8 to a high of
8.
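
For readers who want the relation behind those pH readings: pH is the negative base-10 logarithm of the hydrogen-ion concentration, so each pH unit is a factor of 10 in acidity. A small sketch of that standard relation (not a USGS field procedure):

def hydrogen_ion_molar(pH):
    # Approximate hydrogen-ion concentration, in moles per liter, implied by a pH reading.
    return 10 ** (-pH)

# Compare the most acidic Colorado mine water cited above with neutral water.
for ph in (1.8, 7.0):
    print(f"pH {ph}: about {hydrogen_ion_molar(ph):.1e} mol/L of hydrogen ions")

# The pH 1.8 water is roughly 160,000 times more acidic than the pH 7 water.
print(round(hydrogen_ion_molar(1.8) / hydrogen_ion_molar(7.0)))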

A companion field measurement made on mine water is specific conductance. This property
measures the electrical conductivity of a water sample and is useful as a quick estimate of
total dissolved solids. A low value, from 10 to about 200 microsiemens/centimeter (the unit of specific
conductance measurements), could be considered drinking-water quality. Specific conductance
measurements of mine waters in the Colorado study range from 100 to 38,000 microsiemens/centimeter.

The full characterization of mine water requires a number of other instrumental and analytical
measurements that are carried out using both mobile and laboratory facilities. Three main instrumental
analytical techniques are used to complete the characterization of mine-water samples. These techniques
are: ion chromatography (IC), which is used to determine the concentration of fluoride, chloride, nitrate,
and sulfate in aqueous samples; ICP-AES, which determines the concentration of major and trace
elements (for additional discussion on ICP-AES and an illustration of the instrument, scroll down to
"10,000 element determinations a day"); and liquid ICP-QMS, which is used to determine elements below
the ppm level (for additional discussion and an illustration of a laser ablation ICP-QMS instrument, scroll
back to the "Disaster from space" section).

Why is it so important to characterize mine drainage? Because mine-drainage water almost always flows
into a stream where it can dramatically affect the aquatic organisms and the quality of the water received
by downstream communities. To successfully reduce the effect of the toxic elements, their abundances
must be known.

Mineral-laden water from the Argo drainage tunnel in Colorado, entering into Clear Creek, illustrates the
possible environmental impact of untreated mine drainage.

From the analytical chemistry of mine drainage, scientists have concluded that the major cause of high
acidity of the water is the bacterially catalyzed oxidation of the mineral pyrite. This acidity stimulates the
dissolution of many other sulfide minerals, resulting in the high concentration of metals such as copper
and zinc.

While it is difficult or impossible to stop mine drainage, it might be possible to cut back the rate of the
introduction of toxic elements into the environment. This can be done by hindering the bacteria that speed
up the oxidation of the pyrite or by neutralizing the drainage and extracting toxic elements. Recent studies
have shown that wetlands can concentrate heavy metals from mine drainage. Constructed wetlands
could, therefore, be used to accumulate the pollution from mine drainage. By analytically monitoring the
toxic-metal buildup in these wetlands, we can avoid any impact on the wildlife that might try to live there.

IIe. Pollution Prevention

Cleaning up coal burning

While hundreds of abandoned mines across the country are releasing pollutants, active mines can also
produce pollutants. Among the best examples of air and water pollution control are advances in coal
technology. For years coal has been a major source of both energy and pollution in the United States.
Supplies of natural gas and petroleum are dwindling. Alternative energy sources are not expected to
contribute significantly to the energy needs of the United States in the near future. Coal will continue to
play an important role for energy production through the first half of the 21st century.

Significant improvements in coal processing and burning in modern power plants have dramatically
reduced pollution. The process has been improved in three ways. First, sophisticated equipment has
significantly reduced fly ash and soot compared to the equipment used many years ago; other specialized
equipment greatly reduces sulfur-dioxide emissions.

Coal, a major source of energy in the United States, does not have to cause pollution. This coal-burning
power facility at Brilliant, Ohio, uses a process wherein sulfur-dioxide emissions are cut by 90 percent,
nitrogen oxides by 50 percent, and carbon dioxide by 15 percent. (Photo provided by American Electric
Power Service Corporation).

A second way of reducing coal pollution is by selective mining of low-ash and low-sulfur coals that pollute
less. Detailed chemical analyses of coal prior to mining are required to determine the concentrations of
ash, sulfur, and other toxic elements. A new multielement analytical technique that introduces the sample
in liquid form to an inductively coupled plasma quadrupole mass spectrometer (ICP-QMS) is proving very
useful for this purpose. This technique can determine over 70 elements at the ppm to ppb levels. To
analyze coal by this method, it must first be converted to ash, fused with a flux, and dissolved. The
solution is then sprayed into the 7,000 C thermal environment of an argon plasma where it is ionized. The
resulting charged atomic particles are drawn into a high vacuum portion of the instrument where a
quadrupole mass spectrometer (shown in the "Disaster from space" section) separates and counts the number
of atoms for each different mass.

Detailed mapping of trace elements in a coal seam may be required to locate low-polluting coal
resources. The major drawback to selective mining is that only small quantities of clean coals exist, and
those that can be found may be too far from power plants or too deep to be economically recovered.

The principal source of sulfur emissions from burning coal is pyrite, whose presence is shown in this
photomicrograph of a coal sample.

Coal cleaning is the third method for reducing pollution. Sulfur minerals such as pyrite can be removed by
using various techniques. Chemical analysis of the coal and identification of mineral inclusions
determines what cleaning procedure will be most effective. This requires looking at the coal under high-
power microscopes or performing tests that separate mineral and coal species by using complex physical
and chemical techniques. Understanding the chemistry and mineralogy of coals has contributed
significantly to the progress that has been made in recent years toward the prevention of coal-burning
pollution.

For additional information on environmental geochemistry, order a paper copy of Understanding Our
Fragile Environment, USGS Circular 1105, a publication in the Public Issues in Earth Science Series.

---------------------------------------------------------

III. Mapping the Chemistry of the Earth's Surface

IIIa. Assessment of public lands

Mapping stream sediments for resource exploration

The successes of the old-time and latter-day prospectors have diminished the likelihood of discovering
additional mineral resources at the surface of our planet. Yet our national and global dependence on
mineral resources continues to grow unabated, and recycling can only provide a fraction of our needs.
By necessity, today's search for the many minerals vital to society is focused on ore deposits that lie
beneath the Earth's surface.

Earlier in this section (back by the illustration of the copper-molybdenum porphyry cross section), we
discussed the use of models to locate ore deposits. Another way of locating mineral resources is by
identifying element-dispersion halos. Dispersion halos are abnormally high levels of metals that develop
around deposits. A halo can extend for a long distance from the deposit and, once recognized, can be
followed back to its source. The most familiar example of a halo is the dispersion of gold nuggets in
drainages downstream from gold mother lodes.

Using today's technology, collected stream-sediment samples may be processed and analyzed for as
many as 40 elements, giving an indication of very faint halos at some distance from a variety of deposit
types. If elements of economic interest, such as gold, silver, copper, lead, or zinc, are present, they will
be revealed in these analyses. This process is repeated for many samples until the entire study area is
covered.

By evaluating our nation's mineral resources, we can determine the appropriate use of Federal lands.
Helicopters have little impact on the land and can be used in remote areas, such as Alaska, to efficiently
gather samples for geochemical analysis.

Keeping track of the resources of our country

Congress has mandated that the USGS assess the mineral-resource potential of public lands, especially
those lands set aside as wilderness or proposed wilderness. These assessments provide an inventory of
mineral resources for future generations. In 1964, the Wilderness Act was passed, and a 20-year
program to assess the mineral resources of U.S. Forest Service wilderness areas began. A large amount
of this work involved the analysis of stream sediments to determine the presence or absence of halos.
Subsequent laws have required mineral-resource assessments on additional public lands. The USGS
also works with the Bureau of Indian Affairs and individual tribes to assess the mineral resources on
Indian lands.

By relating ore-deposit models and geochemical data to geologic observations and plate tectonic theory,
geologists can predict what types of ore deposits may be found in a given geographic area. The USGS
supplies this information to the public and to other government agencies. Assessments are published by
the USGS for use by land-use planners, Federal, state, and local government agencies,
environmentalists, and private individuals. Many maps, such as the map of lead in stream sediments of
Colorado, are useful for both resource evaluations and environmental assessments.

This map of lead in Colorado stream sediments was generated with existing data from the National
Geochemical Data Base. It shows a geochemical halo from the Colorado Mineral Belt as well as lead
contamination from industry in some cities.

It is important to weigh the mineral-resource potential of a tract of land against other potential uses such
as water resources, grazing, forestry, recreation, tourism, and scenic value. Chemistry plays a vital role in
this assessment process.

Mobile laboratories

Looking for halos of mineralized areas or testing for pollution is like playing the game of hide-and-seek.
The target can be found more easily if you are given hotter or colder clues. To provide these clues,
analysts in mobile laboratories perform chemical analyses for geologists in the field. As a result, samples
can be evaluated quickly. The use of mobile laboratories by the USGS dates back to the turn of the
century. These pictures show a mobile laboratory used in Montana: a horse-drawn wagon that
carried the necessary reagents, glassware, and other equipment, which were set up in a tent.

You can find the halo of a deposit faster if you are constantly aware of whether you are getting closer or
farther away. To rapidly determine the distribution of elements in the field, portable analytical
equipment was already in use in 1907.

When the field area was reached, a tent was set up and wet chemical analyses were performed under
primitive conditions from a kneeling position.

Over the years, the mobile laboratories have become more refined. Since the 1960s, these laboratories
have provided USGS geologists and geochemists with over 1 million analyses, supplying timely
information for evaluating the mineral-resource potential of public lands.

Today, the principle behind mobile laboratories is the same as in the wagon days, but the equipment is
significantly more advanced. In addition to clean, relatively comfortable surroundings that are protected
from the weather, the sophisticated electronic equipment shown here can run a large number of sensitive
analyses.

Exploration for covered ore deposits

Ore deposits covered by transported overburden, such as gravels, are more difficult to locate than ore
deposits that are buried in the host rock in which they formed. New research using super-sensitive
analytical techniques provides scientists with a way to see through that covering.

This research is based on the idea that buried ore deposits may release trace amounts of ore-related
elements that are transported through the overburden. Trace elements found at the surface, however,
may have been introduced originally with the transported overburden and do not necessarily indicate the
presence of a covered ore deposit. The ability to distinguish between the trace elements already in the
overburden and those migrating from an ore deposit would provide a powerful tool for subsurface
exploration. Two of the methods currently being researched by the USGS are ground-water analysis and
selective chemical extraction of overburden samples to recover the loosely bonded migrating elements
from the surfaces of the gravel fragments.

Ground water collected from wells, springs, and drill holes may provide clues to the presence of covered
deposits. This water moves very slowly through the overburden until it discharges at the surface as a
spring or seeps into a body of water. Subsurface flow rates vary from almost zero to over 100 feet per
year. The slower rates cause water to have a longer contact time with the subsurface gravels, rocks, and,
if present, ore deposits, permitting minute amounts of metals to be leached from the rocks.

Geochemists can sample water from previously drilled holes to detect the "halo" of an ore deposit.

Detecting gold in a ground-water dispersion pattern requires an extremely sensitive analytical technique.
The USGS has developed a method for detecting gold in water at the one-part-per-trillion (ppt)
determination level. One ppt could be represented by one marble on 20,000 football fields (almost 39
square miles) covered with marbles.
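
As a rough check on this comparison, the arithmetic can be sketched in a few lines of Python. The marble size (about 1 centimeter) and the field area (about 5,000 square meters per field) are assumptions chosen only to test the order of magnitude; they are not figures from the original analogy.

# Back-of-the-envelope check of the "one marble on 20,000 football fields" picture
# of a one-part-per-trillion (ppt) detection level.
# Marble size and field area are assumed values for illustration only.

MARBLE_DIAMETER_M = 0.01      # assume a small marble roughly 1 cm across
FIELD_AREA_M2 = 5_000         # assume a football field covers roughly 5,000 m^2
N_FIELDS = 20_000

total_area_m2 = FIELD_AREA_M2 * N_FIELDS
square_miles = total_area_m2 / 2.59e6               # one square mile is about 2.59e6 m^2
marbles = total_area_m2 / MARBLE_DIAMETER_M ** 2    # one marble per square footprint

print(f"area covered: about {square_miles:.0f} square miles")  # roughly 39 square miles
print(f"marbles needed: about {marbles:.0e}")                  # roughly 1e12, so one of them is ~1 ppt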

In this technique, gold ions are removed from relatively large-volume water samples by an anion-
exchange resin, in a manner similar to the exchange of ions that takes place inside a commercial water
softener. Later, the gold ions are stripped from the resin and analyzed by graphite-furnace atomic-
absorption spectroscopy. (AAS is discussed in the Maps of natural contamination section.)
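
Part of what makes the measurement possible is the preconcentration provided by the resin: gold collected from a large volume of water is later stripped into a much smaller volume, multiplying its concentration before it reaches the graphite furnace. A rough sketch of that effect, using assumed volumes rather than the actual USGS procedure:

# Rough illustration of preconcentration by ion exchange (volumes are assumed).
water_volume_L = 4.0        # assume a relatively large water sample
eluate_volume_L = 0.02      # assume the gold is stripped from the resin into 20 mL
gold_in_water_ppt = 1.0     # gold at the 1 part-per-trillion level

enrichment = water_volume_L / eluate_volume_L
gold_in_eluate_ppt = gold_in_water_ppt * enrichment   # assumes essentially complete recovery

print(f"enrichment factor: {enrichment:.0f}x")
print(f"1 ppt in water becomes ~{gold_in_eluate_ppt:.0f} ppt in the eluate")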

The USGS is working on a new method to gather information from nonproductive drill holes.

Using a simple device, a ground-water sample is recovered from the drill hole in hopes that it will show
proximity to an ore deposit.

The relatively large dilute water sample is filtered and stabilized prior to being transported to a mobile
laboratory for analysis.

Mineral scavengers provide a clue

There is another way of detecting the trace elements carried from a deposit by ground water. Ground
water is drawn upward by evaporation at the surface. During this upward migration, trace elements in the
water are affixed to minerals in the overburden. The affixation, or bonding, may range from weak to very
strong. The strength of this bonding depends on the chemical nature of both the trace element and the
host mineral. The differences in bond strength are comparable to the difference between the weak
electrostatic attraction that holds an inflated balloon to a wall and a nail driven into a stud.

Minerals that are capable of scavenging trace elements from ground water with increasing bond strength
include hydrated aluminum silicates (clays), secondary carbonates, amorphous (noncrystalline) oxides of
manganese, and the amorphous and crystalline oxides of iron. Trace elements scavenged by these
minerals are removed by treating samples of overburden with chemicals that react selectively with each
mineral phase. Sequential selective extractions are used to release trace elements from the host minerals
in order of increasing bond strength, for example, clays first and crystalline iron oxides last.
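
The bookkeeping behind such a sequence can be sketched as a short, ordered procedure. In the sketch below, the mineral phases follow the paragraph above, but the reagent descriptions and the element concentrations are hypothetical placeholders, not a published USGS protocol.

# Minimal sketch of sequential selective extraction bookkeeping.
# Each step targets one host-mineral phase, in order of increasing bond strength.
# Reagent descriptions and concentrations below are hypothetical placeholders.

extraction_sequence = [
    ("clays (hydrated aluminum silicates)", "mild exchange solution"),
    ("secondary carbonates",                "weak acid"),
    ("amorphous manganese oxides",          "mild reducing agent"),
    ("amorphous iron oxides",               "stronger reducing agent"),
    ("crystalline iron oxides",             "strong reducing/acid attack"),
]

# Hypothetical gold released at each step, in parts per billion (ppb)
sample_results_ppb = {
    "clays (hydrated aluminum silicates)": 0.2,
    "secondary carbonates": 0.1,
    "amorphous manganese oxides": 1.8,
    "amorphous iron oxides": 0.4,
    "crystalline iron oxides": 0.3,
}

for phase, reagent in extraction_sequence:
    released = sample_results_ppb.get(phase, 0.0)
    print(f"{phase:40s} ({reagent}): {released:.1f} ppb Au")

# A concentration spike in one specific phase (here the manganese oxides) is the kind
# of signal that may point to metal migrating from a buried deposit rather than metal
# originally present in the transported overburden.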

The principal advantage of selective extractions is that they make it possible to distinguish elements that
have migrated from an outside source from those normally present in the overburden. Thus, the presence
of a gold deposit in Nevada may well be indicated by the occurrence of gold, or its associated elements
arsenic and antimony, in a specific mineral phase in the overburden.

IIIb. Geographic chemistry: National Geochemical Data Base

Chemistry of a nation on file

For many decades, samples of geologic materials (rocks, soils, sediments, waters, and others) have been
chemically analyzed. The geochemical data collected from these and other scientific programs and
projects provide the basis of a growing national geochemical data base. A part of the data base contains
chemical analyses of stream sediments from hundreds of thousands of drainage basins throughout the
United States. These analyses represent the chemistry of surface materials in these basins and may be
used in many applications concerning health, the environment, and natural resources. (Scroll up a few
screens to see the map of lead in stream sediments of Colorado.)

This map shows the coverage of analytical data generated during the National Uranium Resource
Evaluation program that is stored in the National Geochemical Data Base.

10,000 element determinations a day

One of the principal analytical methods represented in the National Geochemical Data Base is
inductively coupled plasma-atomic emission spectrometry (ICP-AES). This method provides
a rapid and precise means of determining up to 50 elements simultaneously at minor and trace levels.
The ICP-AES technique is widely regarded as the most versatile analytical technique in the chemistry
laboratory.

When the sample solution is introduced into the spectrometer, it becomes atomized into a mist-like cloud.
This mist is carried into the argon plasma with a stream of argon gas. The plasma (ionized argon)
produces temperatures close to 7,000°C, which thermally excites the outer-shell electrons of the elements
in the sample.

In an inductively coupled plasma-atomic emission spectrometer the (1) aqueous sample is pumped and
(2) atomized with argon gas into the (3) hot plasma. The sample is excited, emitting light wavelengths
characteristic of its elements. (4) A mirror reflects the light through the (5) entrance slit of the
spectrometer onto a (6) grating that separates the element wavelengths onto (7) photomultiplier
detectors.

The relaxation of the excited electrons as they return to the ground state is accompanied by the emission
of photons of light with an energy characteristic of the element. Because the sample contains a mixture of
elements, a spectrum of light wavelengths is emitted simultaneously. Just as rain breaks sunlight into a
rainbow, the spectrometer uses a grating to disperse the light, separating the particular element
emissions and directing each to a dedicated photomultiplier tube detector. The more intense this light is,
the more concentrated the element. A computer converts the electronic signal from the photomultiplier
tubes into concentrations. The determination portion of the process takes approximately 2 minutes to
complete. In 1 day a chemist using the ICP-AES can analyze 200 samples for a total of 10,000 elemental
determinations.
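
In general terms, the conversion from photomultiplier signal to concentration rests on the fact that emission intensity increases with concentration, so the instrument is typically calibrated first with standards of known concentration. The sketch below is a simplified illustration of that idea with invented numbers; a real ICP-AES calibration also involves blank correction, interference checks, and more careful curve fitting.

# Simplified illustration: convert emission intensity to concentration using a
# linear calibration built from standards (all numbers are invented).

# Calibration standards: (known concentration in ppm, measured intensity in counts)
standards = [(0.0, 120), (10.0, 10_250), (50.0, 50_900), (100.0, 101_400)]

# Ordinary least-squares fit of intensity = m * concentration + b
n = len(standards)
mean_c = sum(c for c, _ in standards) / n
mean_i = sum(i for _, i in standards) / n
num = sum((c - mean_c) * (i - mean_i) for c, i in standards)
den = sum((c - mean_c) ** 2 for c, _ in standards)
m = num / den
b = mean_i - m * mean_c

def intensity_to_ppm(intensity):
    """Invert the calibration line to estimate concentration."""
    return (intensity - b) / m

print(f"unknown at 37,500 counts: about {intensity_to_ppm(37_500):.1f} ppm")

# Throughput arithmetic from the text: ~2 minutes per sample, 50 elements at once.
samples_per_day, elements = 200, 50
print(f"{samples_per_day * elements} determinations from {samples_per_day} samples "
      f"(~{samples_per_day * 2 / 60:.1f} hours of determination time)")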

IIIc. Public health and safety: Element maps of soils

Maps of natural contamination

Recently, an environmental problem was solved by mapping the soil chemistry in the San Joaquin Valley
of California. In the 1980s, wildlife managers noticed increasing reproductive failure among nesting water
birds at the Kesterson Wildlife Refuge in the northern end of the San Joaquin Valley. Chemical analyses
of tissues from both birds and fish indicated toxic levels of selenium. The question was not only why, but
why at that particular time? What had suddenly changed?

Beginning in the 1870s, irrigation was used to turn the nonproductive desert land of the San Joaquin
Valley into the patchwork quilt of fields shown in this 1985 satellite image.

To avoid the loss of crops caused by the buildup of salinity in the soil, a drain was built to transport the
used irrigation water away. Instead of completing the projected 290-mile drain to the sea, it was halted
205 miles short of its goal, forming a wetland and leaving the water to evaporate.

Wildlife moved into the wetlands and prospered until selenium, leached from marine shales in the Coast
Ranges, built up in the food chain and resulted in severe deformities in higher life forms such as this baby
duck.

Irrigation of arid soils in the San Joaquin Valley began in the 1870s, accumulating salts in shallow ground
water perched on impermeable clay layers. Within a decade, farmers recognized the need for drainage
facilities to lower the level of salts in the ground water or risk permanent loss of agricultural capacity, but
the problem persisted. Finally in 1960, California voters approved financing for the State Water Project
that included an extensive drainage system. Between 1968 and 1975, 85 miles (of the projected 290
miles) of the San Luis drain facility had been completed with a temporary termination at Kesterson
Wildlife Refuge, still many miles short of the Sacramento-San Joaquin Delta, its projected destination.

By 1978, drainage into Kesterson had increased significantly. Unseen, the selenium levels were also
increasing and by 1982 had built to toxic levels in the food chain of the wildlife refuge. Fish were affected
first followed by waterfowl. Ultimately, the ponds were closed and filled in as the quickest solution to an
environmental disaster. One question remained: where did the selenium come from? The eastern side of
the San Joaquin Valley has a deficiency in selenium, and, in fact, livestock grazing in the area needed to
have selenium added to their food as a supplement. So why did Kesterson have too much selenium?
Further chemical studies focused on the Panoche Fan, the source of most of the drain water. Through
chemical analyses using hydride generation atomic absorption spectrometry (AAS), high-selenium soils
were found and mapped near the mountain front on mud-flow debris derived from selenium-enriched
marine shales in the Coast Ranges.

AAS uses a bright source of the element's characteristic light, usually from a lamp whose cathode
contains a large amount of the element. This light is then passed through a cloud of non-excited, ground-
state atoms from the sample, where it is absorbed in proportion to the amount of the element present in the
cloud. Next, the light goes to a monochromator, which separates out the wavelength of interest. The
light is then converted into an electrical current, amplified, and rectified. A computer calculates the
quantity of the element in the samples.
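
The underlying relationship is the Beer-Lambert law, in which absorbance (the logarithm of the ratio of incident to transmitted light) is nearly proportional to concentration at low levels. The snippet below is a hedged illustration with invented readings and an assumed calibration factor; it is not the USGS hydride-generation procedure itself.

import math

# Illustrative only: convert transmitted-light readings into absorbances and then
# into selenium concentrations via a single assumed calibration factor.

I0 = 100_000            # source intensity with no analyte in the light path (counts)
CAL_SLOPE = 0.042       # assumed absorbance per microgram-per-liter of selenium

def absorbance(transmitted):
    """Beer-Lambert absorbance A = log10(I0 / I)."""
    return math.log10(I0 / transmitted)

def selenium_ug_per_L(transmitted):
    return absorbance(transmitted) / CAL_SLOPE

for counts in (91_000, 62_000, 30_000):
    print(f"I = {counts:6d}  A = {absorbance(counts):.3f}  "
          f"Se ~ {selenium_ug_per_L(counts):.1f} ug/L")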

With the selenium data generated by AAS, the source of the selenium in the wildlife refuge was studied.
Is the source of the selenium natural or caused by humans? The answer is both. The occurrence of
selenium in the soils and ground water of the Panoche Fan is perfectly natural. Humans, however, are
interacting with one part of the natural hydrologic cycle in which elements are transported from minerals
to the ultimate sink, the ocean. Here the elements would have been naturally recycled by reprecipitation
as minerals in marine shales. By effectively increasing the amount of rainfall (via irrigation), human activity
has sped up the leaching of selenium out of the Panoche Fan sediments.

The temporary halt of the San Luis drain had left the project 205 miles short of the sea, and the drain
water was instead contained in holding ponds. The extra water turned the holding ponds into wetlands
where birds that used the flyway made nesting sites. The ultimate solution to the San Joaquin drainage problem
may lie in finishing the drain and discharging the water directly to the ocean so that nature can recycle it
into marine sediments again.

To some people, the Kesterson Wildlife Refuge has been an environmental disaster.
Nevertheless, it has served as an environmental lesson. The holding ponds demonstrate the feasibility of
creating wetlands to clean up some forms of metal pollution. They also prove, however, that wetlands
created for bioremediation cannot be built and left untended without risk to wildlife. The levels of
toxic elements being concentrated in the wetland must be monitored so that they do not build up to
levels that are toxic to wildlife.

Industrial sources of contamination

A similar cause for environmental concern is the presence of mercury in the agricultural soils of the
Panoche Fan. To the west of the San Joaquin Valley, a major mercury mineralization district is located
near the town of Idria. The New Idria Mine, operated between 1858 and 1972, was the second largest
mercury producer in North America.

Streams that drain the north, east, and south sides of the mining district all contribute sediment to the
Panoche Fan. Chemical analyses of soil samples from the Fan clearly show that the soil contains
elevated mercury levels. These high levels of mercury could be caused by a combination of natural
geochemical dispersion and mining activity, considering the time period of major mercury production at
the New Idria Mine (scroll back to "Mapping the Chemistry of the Earth's Surface" for a discussion of
dispersion halos).

Like the selenium data, the mercury data were generated using AAS. The rocks were digested, and the
solution was then chemically reduced to form elemental mercury, which was separated as a vapor and
measured by AAS. This method is called cold-vapor AAS.

Robots in the laboratory

Geochemical studies generate large quantities of samples to be analyzed in the laboratory. Although
technological advances have produced vast improvements in analytical measurements and data
reduction, the manual preparation of samples has remained a time-consuming problem. As a result, one
of the most rapidly growing areas in laboratory automation is the use of robotics for sample preparation.

What does a robot have in common with a technician? An arm, a hand, and fingers. The robot's arm moves
up/down, in/out, and rotates 360°.

The fingers grip tubes and flasks, and the hand rotates for pouring liquids.

A laboratory robot generally consists of an arm, a hand, and a pair of fingers. These components are
programmed to duplicate the sample preparation usually performed by a laboratory technician. The
centrally positioned robot moves samples in and out of laboratory work stations. Each work station
performs a specific function such as dispensing acids, mixing, heating, centrifuging, filtering, and
weighing.
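
Conceptually, the robot's control program is an ordered tour of work stations for each sample. The outline below is hypothetical; the station names and their order are chosen for illustration and do not describe a specific USGS configuration.

# Hypothetical outline of a sample-preparation robot's control loop.
# Station names and their order are illustrative, not an actual USGS setup.

WORKSTATIONS = [
    "weigh sample",
    "dispense acid",
    "heat / digest",
    "centrifuge",
    "filter",
    "dilute to volume",
]

def prepare(sample_id):
    """Move one sample through every work station in order."""
    for station in WORKSTATIONS:
        # In a real system each step would issue arm, hand, and finger motion
        # commands and wait for the station to report completion.
        print(f"sample {sample_id}: {station}")

for sample_id in ("A-001", "A-002"):
    prepare(sample_id)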

There are several advantages to the use of robotics. Robots have improved productivity by a factor of 2
or 3. Because sample preparation requires the use of hazardous chemicals, the robot minimizes human
exposure to these chemicals. By delegating repetitive tasks to the robot, the technician is freed to take on
greater responsibilities. Finally, robots provide consistency in sample preparation
and improve the precision of the data.

In USGS laboratories, robotics have been applied to a range of techniques, including sample
disaggregation, the decomposition of tens of thousands of samples per year for ICP-AES methods,
the weighing of 7,000 charges of flux per year for the XRF major-element analysis method, and other
similar sample-preparation tasks. The use of laboratory robotics continues to increase as the benefits
of each application are realized.

---------------------------------------------------------

IV. Can we depend on chemical analyses?

IVa. Measuring quality

The importance of measurements

One of the tasks facing scientists is to measure and define unknown quantities. These measurements are
important because they can warn us of potential hazards from volcanoes and environmental
contamination or help us develop our mineral resources to stay competitive in the worldwide economy. In
Earth sciences, the measurements of geological samples are used in making policy decisions. These
decisions can affect all Americans in topics ranging from pollution prevention and control to evaluation of
mineral resources and wilderness areas.

Decisions are made every day based upon measurements of various substances (or areas containing
them). Without quality measurements, misleading or dangerous conclusions could be drawn.

The uncertainty of measurements

There are many difficulties associated with making measurements. Quality assurance involves minimizing
mistakes and correcting problems before the information is used.

When an archer releases an arrow at a target, both the distance from which the archer shoots and the
size of the target define what is considered acceptable accuracy and precision. Shots from 5 yards would
be expected to hit closer to the bull's-eye than shots from 50 yards. If the arrows miss the target
completely, the archer is considered inaccurate. The closer together repeat shots hit the target to each
other, the more precise the archer. The strength and dexterity of the archer, the acuity of the archer's
eye, the adjustment of bow sights, wind conditions, and the number of shots taken also contribute to the
accuracy and precision of the archer. A laboratory procedure is similar in the need to understand the
variables involved and the possibilities for error.
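
The same two ideas carry over directly to replicate chemical analyses: accuracy is how close the average result comes to the true or accepted value, and precision is how tightly the replicates cluster. A minimal sketch with invented replicate values:

from statistics import mean, stdev

# Invented replicate determinations of copper in a sample whose accepted value is 250 ppm
accepted_ppm = 250.0
replicates = [244.0, 252.0, 248.0, 255.0, 247.0]

avg = mean(replicates)
bias_percent = 100.0 * (avg - accepted_ppm) / accepted_ppm   # accuracy: closeness to the accepted value
rsd_percent = 100.0 * stdev(replicates) / avg                # precision: relative standard deviation

print(f"mean = {avg:.1f} ppm, bias = {bias_percent:+.1f}%, RSD = {rsd_percent:.1f}%")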

Searching for the best

A mistake in measurements can impact decisions made on endangered animal habitat, mineral
exploration, or remediation of an environmental problem. When a quality assurance program claims 99.9
percent accuracy, consider what that could mean in terms of error: 1 hour of unsafe drinking water per
month, 16,000 lost pieces of mail per hour, or 176,000 checks deducted from the wrong bank accounts
every day. The quest is for 100 percent accuracy and precision, even if it is not attainable.

The USGS Reference Materials Project

In the field of analytical chemistry, reference materials serve an important role in the development of new
techniques and the periodic testing of established methods. Used correctly, reference materials provide
investigators with a mechanism to objectively compare their results with established values and
determine if any bias exists.
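
In practice, that comparison often reduces to asking whether a laboratory's average result for a reference material falls within the uncertainty quoted for the established value. A minimal, hypothetical sketch of such a check; the certified value, its uncertainty, and the laboratory results are all invented.

from statistics import mean

# Hypothetical comparison of laboratory results with a certified reference value.
certified_value, certified_uncertainty = 62.5, 1.2     # e.g., percent SiO2 (invented)
lab_results = [63.9, 64.2, 63.7, 64.0]                 # invented replicate measurements

lab_mean = mean(lab_results)
difference = lab_mean - certified_value

if abs(difference) > certified_uncertainty:
    print(f"possible bias: lab mean {lab_mean:.1f} differs from the certified "
          f"value {certified_value:.1f} by {difference:+.1f}")
else:
    print("lab mean agrees with the certified value within its quoted uncertainty")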

It was this drive to produce quality data that led the USGS and the Massachusetts Institute of Technology
in the early 1950s to jointly develop the first geochemical reference materials. This early work started a
USGS tradition of preparing high-quality reference materials that are used for both domestic and
international geochemical programs. To date, 29 different geochemical standards have been produced
with an estimated worldwide distribution of over 20,000 units.

Samples are dried, crushed, powdered, mixed thoroughly, then bottled and analyzed. Some reference
materials are distributed by the USGS in limited quantities directly to researchers and analytical
laboratories. Other standards can be prepared on a contract basis for individual government agencies.
Geologic analytical laboratories can compare their results to these standards.

Initially, the need for quality control led to the development of several silicate rock standards that were
important in such diverse activities as the lunar program, ore-genesis studies, and volcano monitoring.
When the mining and exploration industries clamored for reference materials, the USGS responded by
generating six exploration standards designed to contain elevated concentrations of key elements. The
USGS involvement with the mining industry continues today with the recent development of coal and
gold-ore standards, which will be useful in resource appraisals.

Environmental concerns are becoming a major part of the national agenda, and the USGS Reference
Materials Project provides quality reference materials to aid in this field of study. A major emphasis of this
effort will be to conduct cooperative studies with other Federal agencies, thus helping them respond to
national needs.
