

                               Submitted to the Bulletin of the Seismological Society of America
                                               September 22, 2000

        Near-real-time, Alaskan Regional Seismic Network
              Monitoring with the Iceworm System
                            Kent G. Lindquist and Roger A. Hansen

                                     Geophysical Institute
                                 University of Alaska, Fairbanks
                                     Fairbanks, AK 99775

        We present a snapshot of Iceworm, the evolving, near-real-time seismic processing system
at the University of Alaska, Fairbanks. The core components of Iceworm, including database
management, data-handling, long-distance communication, and run-time execution control and
monitoring, are built on the Antelope Environmental Monitoring System from Boulder Real-Time
Technologies. Inside Antelope we embed the Earthworm system from the U.S. Geological Survey
to take advantage of digitizing, communication, picking, and location capabilities of Earthworm
(parallel to those of Antelope), as well as to provide support to run any of the Earthworm mod-
ules. In addition, tens of thousands of lines of new source code have been added to meet the full
scope of our regional network monitoring needs. We enumerate these needs, discussing Iceworm
system architecture, key features of the technology and new ideas introduced, and the growth and
evolution of the system to the extent that it illustrates establishing automatic processing for a
regional network. Principal features of Iceworm include relational database management of sys-
tem-setup information, quasi-static network information, incoming data, and processing results.
Data intake from diverse sources is handled in a unified manner. Participation in virtual seismo-
graph networks is supported. A framework for near-real-time processing of seismic data with
arbitrary algorithms is supported. Pager-based alarm functions, analyst review and catalog pro-
duction, and distribution of information via email, fax, telephone calldown lists, interactive work-
station graphics, cell-phone paging, and world-wide-web are all integrated. We end with a
presentation of the Iceworm system performance from 1997, when we first began relying on it for
data acquisition and processing, to December of 1998.

                                       Introduction

       The goals of the Alaskan regional seismograph network include mitigating seismic, volca-

nic, and tsunami hazards, providing rapid information for disaster response, and studying the tec-

tonics of the Alaskan and Arctic solid earth. Through 1994, the prime component of the Alaska

Earthquake Information Center (AEIC) operational work was the production of catalogs of

regional earthquakes [e.g. Rowe et al., 1996], along with 24-hour, on-call analyst review of signif-

icant earthquakes and volcanic events. Advancing software and hardware technologies allow us

continually to better meet our scientific and monitoring needs. Recently, we have made intensive

efforts to bring the Alaskan network’s seismometers, telemetry, computers, and software up to the

state of the art [Lindquist and Hansen 1998, 2000]. In this work we focus on software develop-

ments at the Alaska Earthquake Information Center. Our collocation with the Alaska Volcano

Observatory, along with partially shared infrastructure and software systems, allows the AEIC

advances to contribute as well to seismological research and monitoring of the many active Alas-

kan volcanoes [e.g. Lindquist et al., 1996; Benoit et al., 1998].

       Principal among the software improvements possible is the ability to provide near-real-

time information on the location and magnitude of Alaskan earthquakes. In this context, “near-

real-time” means the time frame several seconds to minutes after the onset of a seismic event and

after the arrival of the elastic waves at the recording stations [Malone, 1996]. Above all, hazard

mitigation and disaster response drive the need for fast information about large earthquakes and

erupting volcanoes. For scientific purposes, efficient study of earth processes is promoted by eas-

ier and faster access to waveform data, by the ability to run experiments on real-time data streams,

and by the potential of determining additional complex measures automatically or at least with

less manual labor for the seismologists. Finally, the traditional role of regional networks in pro-

ducing earthquake catalogs is vastly simplified by automatic processing.

       The construction of an integrated software system for Alaskan automatic and interactive

processing of seismic signals encompasses a broad range of needs, many of these shared by other

regional networks. These needs include:

•   continuous data acquisition from diverse sources and with diverse telemetry technologies
•   robust long-distance waveform data exchange; participation in virtual seismograph networks
•   continuous waveform-data archiving
•   integrated processing of multiple incoming datatypes
•   real-time display of continuous data (e.g. helicorder-style plots)
•   continuous generation and monitoring of envelope functions, spectrograms, beams, etc.
•   automatic phase picking and event detection
•   automatic phase association and hypocenter location
•   array processing for heightened sensitivity and expanded geographic coverage
•   alarm notification of significant earthquakes (via pager, email, local graphical displays, web)
•   tools for duty-person response to alarm events
•   analyst review tools
•   extraction and storage of segmented data for detected earthquakes
•   catalog production tools
•   system-maintenance tools; run-time control; data dropout and status monitoring
•   interactive scientific analysis tools
•   easy retrieval of archived parametric catalog data, segmented and continuous waveforms
•   environment for software production for research projects and operational support

       The commonality of these needs to many regional networks is reflected in the literature

describing a number of stand-alone systems developed at several regional networks. In some

cases this commonality is discussed explicitly, for example in Maechling et al. [2000] on the

Southern California Seismic Network (SCSN) /TriNet. While a full review of regional network

systems is beyond the scope of this work [for one perspective, see Malone, 1999], cursory over-

view reveals two separate classes of systems. Some of the systems at other networks have been

tailored specifically to meet the needs of that regional network. These include the South Iceland

Lowlands (SIL) system [Bödvarsson et al. 1996, 1999], the Rapid Earthquake Data Integration

(REDI) project at Berkeley [Gee et al. 1996], the Northern Norway SEISNOR system [Havskov

et al. 1992], Terrascope [Hauksson et al. 1994], the ISAIAH system [Given, 1994], the Taiwanese

system [Wu et al. 1997], SUNWORM at the University of Washington [Malone, 1995], and the

Japanese Urgent Earthquake Detection and Alarm System [Nakamura 1994]. Additionally, sys-

tems for global near-real-time processing also contain instructive lessons for regional network

monitoring, such as the Intelligent Monitoring System [Bache et al., 1990] and the International

Deployment of Accelerometers (IDA) [Berger and Chavez 1997].

        In many cases the stand-alone systems at various networks have reinvented the wheel,

inspiring attempts at panacea solutions in the form of software toolboxes. Thus, in addition to the

network-specific systems mentioned above, several explicit attempts have been made to general-

ize part or all of the regional-network software problems for widespread adoption. This includes

Earthworm [Bittenbinder, 1994, Dietz et al. 1995] with an application example in Dietz et al.

[1994]; SeisNet [Ottemüller and Havskov 1998], with an application example in Ottemüller and

Havskov [1999]; and Antelope [BRTT 1998], with application examples in Al-Amri and Al-Amri

[1999] and von Seggern et al. [2000].

       Even with the toolboxes, however, all regional networks require at least some tailoring and

customization of a generic system [Hansen, 2000]. A ‘one size fits all’ strategy for regional net-

works is difficult because of variations in funding levels, mandated requirements for performance,

availability of technical-staff time, and existing deployed technology (including legacy infrastruc-

ture, computer resources, station types and digitizer types).

       The work presented here begins approximately with the emerging trend of building ‘tool-

boxes’ for regional network processing. Our development work has paralleled the evolution of

two major component toolboxes, the Earthworm system and the Antelope system. Our philosophy

has been to adopt as much as possible from these and other development efforts, filling out as nec-

essary with our own code to meet all the needs of our network. The themes of modular software,

packetized data, and message-passing, common to many seismic software systems, have allowed

us to glue together these software packages. We have built a system for the needs of the Alaskan

network, expanding the Earthworm and Antelope tools rather than assembling one more toolbox

or creating one more attempt at a panacea solution. Nevertheless, many of the lessons and much

of the software presented here should be applicable to other regional networks. One factor has

been that not all toolboxes are created equal: there are variations in the quality of implementation

and ease of use, all resulting in significant cost-of-ownership issues. As made clear by Harvey

[1999], cost/reliability design decisions in a system change when research mission objectives are

augmented by societally important monitoring objectives. These objectives expand software per-

formance requirements to include the “mission-critical” goals of high data recovery, minimal

manual intervention to keep the software running, and minimized data latencies.

        During this development, the need to preserve continuity of data flow and software func-

tionality to lab employees and external institutions, with limited staff and within realistic time and

monetary budgets, has limited our ability to make drastic software changes all at once, instead

forcing us to adopt a fairly conservative strategy of incremental progress. It has never been possi-

ble to simply turn off one system and switch to another. To some extent, opportunities for growth

have exceeded implementation speed. The picture we provide here of our current system as it

meets the above needs, together with the advances made in its development, is a snapshot of an

evolving system.

       A short history of our real-time system development experiences serves to illustrate the

relationship of a representative regional network to advances in acquisition and processing tech-

nology. From 1989 until 1997, earthquake analysis at the AEIC was done with JADE, or

"Just Another Detector of Events," an event trigger that saves segmented data for candidate earth-

quakes without any further processing [Sonafrank et al., 1991]. All picking of arrival times and

determining the locations of earthquakes were done interactively [Rowe et al., 1991] with the

computer programs Xpick [Robinson et al., 1991] and Hypoellipse [Lahr, 1989].

       In 1994, the Earthworm system [Bittenbinder, 1994] arrived as a PC for digitization, along

with Unix software for automatic picking and location. This introduced automatic extraction of

parametric data from the seismic wavefield. Output of phase picks and hypocenters was in sim-

plistic flat files. Earthworm traces its evolution to the Caltech Earthquake Detection and Record-

ing (CEDAR) system, the goal of which was to become a “clever dustman” [Johnson, 1979],

throwing away almost all data as quickly and efficiently as possible, saving only the most essen-

tial for earthquake detection. While the original goal of the Earthworm system simply as a

replacement for the USGS Real-Time Picker [Dietz et al., 1993] made this acceptable, the disad-

vantage for regional networks not running the CalTech-USGS Processing System (CUSP) [Dollar

and Walter 1995] alongside Earthworm was that with the Earthworm system as delivered, wave-

form data were lost. Also, though the parametric data were saved, there was no way to look at the

data from which the picks and hypocenters were derived, not to mention re-analyze or study the

waveform data.

       Simultaneously with the arrival of Earthworm, an initiative was underway at AEIC to link

Sybase [a product of Sybase, Inc.] into the archiving and analyst-review environment at the AEIC

[Anderson et al. 1994]. This introduced relational-database management to the AEIC, a develop-

ment effort relying at the time on the Seismic Unified Data System, or SUDS [Ward and Ander-

son 1994]. However, the Datascope package [Quinlan, 1994] was available and already complete,

suggesting its adoption instead, thus providing complete, database-compatible seismic analysis

tools. This also led naturally to the next step, integration of Datascope with Earthworm. In 1995,

we established output of Earthworm parametric data into the Datascope Relational Database

Management System (RDBMS) [Johnson et al., 1995; Lindquist and Hansen 1995].

       Our next step was to address the loss of waveform data. We present a few extra technical

details here since, though they illuminate Iceworm development, they have been superseded in the

current system and thus will not be described in the main “System Architecture” section below. In

order to support our first integration of Earthworm and Datascope [Johnson et al., 1995], in 1995

Will Kohler wrote the first wave_server, which collected, buffered, and distributed packets of

multiplexed trace-data from the old, multiplexed Earthworm digitizer. One of our first discoveries

of the Iceworm development was that it was computationally inexpensive and convenient to

demultiplex data as soon as possible. Also, we in Alaska recognized our need to incorporate data

streams from digital data sources as well, so we designed the predecessor to the current Earth-

worm tracebuf format for demultiplexed, packetized, continuous waveform data. (The current ver-

sion, identical in the off-the-shelf Earthworm system and in the Alaskan system, differs from that

original design only by a data quality field and two bytes of padding, which were included at the

request of the Earthworm team). This common data format allowed us in late 1995 and early 1996

to write an on-the-fly demultiplexer called ad_demux. The ad_demux module was superseded by

our adsend2orb program when the Datascope orbserver software became available for robust

data-handling. Orbserver (often referred to by the shorthand term “ORB”, meaning “Object Ring

Buffer”) is a client-server based program to store, and retrieve upon request, raw data packets in a

disk-based circular data buffer. Adsend2orb allowed direct communication from the Earthworm

digitizer to the Datascope orbserver. When the Earthworm team redesigned their digitizer to cre-

ate and send the now-standard demultiplexed data packets from the start, we expanded

adsend2orb to receive them via the Earthworm ring2coax/coaxtoring User Datagram Protocol

(UDP) communication mechanism. In addition, we segmented the wave-server ring-buffer ‘tank’

into mini-tanks, putting the now-demultiplexed packets into a tank for each channel. This allowed

segmented data storage via css_report, the central Iceworm databasing utility to be described in

more detail below. Our next task was to write a stand-alone program to retrieve traces from the

wave-server at regular intervals and write them to continuous waveform data files of a user-speci-

fied format. This module we wrote in 1996, calling it the archiver. This exercise took six months

and five full rewrites to get correct. Writing the archiver module involved several traps; avoiding

one of them required following carefully the maxim, for asynchronous processing, that one should rely

on data synchronization rather than process synchronization [Kleinman et al., 1996]. This

archiver compartmentalized all the data-output routines into a single subroutine-hook, allowing

easy switching amongst several output formats.
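The data-synchronization maxim can be illustrated with a short sketch (our own illustration, not the original archiver code): archive windows are keyed by the packets' own timestamps, so no wall-clock timers are involved, and late or bursty packet delivery cannot desynchronize the archiver from the data.

```python
# Hedged sketch of data synchronization for an archiver-like task:
# packets are grouped into fixed archive windows determined entirely by
# their own timestamps, never by timers or sleeps in the process.

from collections import defaultdict

WINDOW = 60.0  # archive-window length in seconds (illustrative value)

def window_of(ts):
    """Map a packet timestamp to the start time of its archive window."""
    return ts - (ts % WINDOW)

def archive(packets):
    """Group (timestamp, channel, samples) packets into per-window bins.

    A packet lands in the window its timestamp selects, so the output is
    identical no matter when or in what order packets were delivered.
    """
    files = defaultdict(list)
    for ts, chan, samples in sorted(packets):
        files[window_of(ts)].append((ts, chan, samples))
    return dict(files)
```

For instance, packets stamped 0.0 s, 59.9 s, and 60.1 s fall into two 60-second windows regardless of their actual arrival times.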

       Adding the components we required to make a full-fledged system produced Iceworm, a

massive, successful development effort combining Earthworm, freeware Datascope, and a wealth

of new software [Lindquist, 1998]. The Iceworm system incorporated several new advances: first,

all parametric information for located hypocenters, segmented waveforms corresponding to those

events, continuous waveforms from the entire network, and quasi-static information about net-

work configuration and system setup are stored automatically in relational databases. The use of

relational databases in seismology is not new [e.g. Berger et al., 1984]. For years they have been

applied to global processing by the Air Force Technical Applications Center (AFTAC), the Nor-

wegian Seismic Array (NORSAR), and the International Data Center (IDC) of the Center for

Monitoring Research (CMR), among others. However, the use of relational databases in real-time

regional network processing is just now emerging [see also Kjartansson, 1996]. Second, we built

an architecture around a demultiplexed waveform-data format that allowed the incorporation of

analog and digital data from diverse sources into the same processing stream. Third, we created a

framework in which to run arbitrary algorithms on real-time data streams. In the current work we

refer to three classes of software programs. Earthworm modules are unmodified, off-the-shelf

Earthworm programs as distributed by the U.S. Geological Survey (using Earthworm version 5.0

at the time of writing). Antelope programs are the unmodified, off-the-shelf versions (using Ante-

lope version 4.2u at the time of writing) distributed by Boulder Real-Time Technologies, Inc.

(BRTT). Iceworm programs are programs written at the University of Alaska to be completely

consistent with the Earthworm distribution, in many cases also drawing on the Antelope toolkits

from BRTT; or University of Alaska software independent of one or both of these packages.

Throughout our efforts we have endeavored to stay completely compatible with off-the-shelf

Earthworm and Antelope modules. While we did not change Earthworm source-code, we did add

some conveniences locally, such as a makefile structure which allowed the distribution to be

recompiled in one pass. We also linked all the Earthworm function calls appearing in multiple

modules into both static and shared-object libraries. The Iceworm system was installed in Febru-

ary, 1997 as the main acquisition system for AEIC, following the death of the aging Masscomp

acquisition computers.

       The initial Iceworm development effort was followed by a steady evolution. One major

component of this was the transition from the freeware Datascope software to the commercially-

produced Antelope system. Antelope is a system of software modules that acquires, transports,

buffers, processes, archives, and distributes environmental monitoring information, in our case

seismic information [BRTT 1998]. While Antelope shares some similar functionality with the

predecessor Datascope package, many new functions were added, and much of the underlying

code was rewritten for vastly improved performance and fully-dynamic data handling [Harvey

1999]. One major step in the evolution of Iceworm was the replacement of the Iceworm archiver

utility with the Antelope orb2db program when it became available in 1997. As described in more

detail below, we have relied heavily on the Antelope package due to our positive experiences with its

flexibility, power, and reliability.

        The work described here was conducted at the Alaska Earthquake Information Center

(AEIC). The AEIC monitors earthquakes in Alaska, provides rapid information on felt earth-

quakes, and disseminates information about earthquakes and seismic hazards to government offi-

cials, the media, the public, and the earth-science community worldwide. Established in 1988, the

AEIC is operated by the Geophysical Institute of the University of Alaska, Fairbanks, coopera-

tively through a Memorandum of Understanding (MOU) with the United States Geological Sur-

vey. The main center of operations is located at the Geophysical Institute in Fairbanks, with the

office of the Alaska State Seismologist.

        Between 1990 and 1998, the AEIC located approximately 46,500 earthquakes in

Alaska and western Canada, or about 6,000 per year. The full area of responsibility of the AEIC

spans the entirety of the Alaskan interior, the Alaskan panhandle in the Southeast, and the Aleu-

tian Islands. Due to the AEIC seismograph station distribution, most of these detected earth-

quakes have been in a “core” area in central and southern Alaska. We must emphasize the great

extent of the entire Alaskan region compared to most single regional networks in the world. This

makes necessary a large, hybrid network and favors the use of joint network and array processing

to cover the entire region.

       The AEIC seismograph network currently incorporates more than 250 stations, including

15 stations operated by the Alaska Tsunami Warning Center (ATWC) in Palmer, Alaska. Most of

the stations are located in central and southern Alaska (Figure 1). The majority of the channels of

data recorded at the AEIC are from short-period, vertical-component, analog stations. Eight of

these short-period analog stations are three-component stations. There are also several high-

dynamic-range, broadband stations operating in Alaska (Lindquist and Hansen 1998), including

stations installed as part of the Princeton Earth Physics Project (PEPP) [Nolet 1995, Hansen et al.

The stations in Fairbanks (COLA) and at Adak Island (ADK), and the station on Kodiak Island

installed in June 1997 (KDAK) are all IRIS stations with the Kodiak station contributing to the

auxiliary network of the GSETT3 (“Group of Scientific Experts Technical Test”). The Data Pro-

cessor unit for the COLA station is located directly in the University of Alaska, Fairbanks (UAF)

seismology lab, allowing convenient near-real-time access to the data streams over lab ethernet.

The IRIS COLA station provides several near-real-time channels each from a Teledyne Geotech

KS-54000 instrument in a 425-foot borehole and from a Streckeisen STS-2 seismometer in a vault

[Townsend, 1996]. Another existing broadband station is an STS-2 seismometer with a Quanterra

datalogger operated by the ATWC in Palmer. Several broadband sites are being installed by the

recently funded Federal/State tsunami-hazard mitigation initiative, called CREST (“Consolidated

Reporting of EarthquakeS and Tsunami”). Funding from the Senate appropriations committee

through NOAA and the USGS is establishing communication links and high dynamic range

broadband stations for this tsunami initiative throughout the Pacific Northwest. The remaining

stations are borehole arrays run by UAF, plus 20 closely spaced digital strong-motion seismo-

graphs recently installed in Anchorage.

       Analog-to-digital conversion of the short-period analog stations in the Alaskan network is

done with the Earthworm NT digitizer, an IBM-compatible Pentium personal computer control-

ling a National Instruments AT-MIO-16F-5 digital data acquisition board. Four AMUX-64T mul-

tiplexers offer 64 channels each, for a total of 256 channels attached to the digitizing board. The

sample rate is currently set at 100 samples per second. In actuality, we have two identical multi-

plexers and digitizers, giving us a full backup to the operational system. Except as mentioned oth-

erwise above, our broadband sensors are Guralp CMG-40T and CMG-3T seismometers with

Guralp digitizers at the field sites. These digitizers are connected via serial lines to personal com-

puters running the SCREAM software from Guralp Inc., from which data are collected into our

processing system as discussed below. In some cases the Guralp digitizer and the PC running

SCREAM software are within feet of each other; in other cases an intervening leased phone-line

conveys this serial data stream. In one case a Lantronix MSS-1 Micro-Serial Server is used to tun-

nel the serial connection through the internet. Our PEPP stations are similarly collected with the

SCREAM software, though in this case the sensor is the Guralp PEPPV seismometer.

       Most of the seismic processing is done on a network of Sun workstations. The principal

workstation is a Sun Ultra-2 with 768 MB of onboard Random Access Memory (RAM) and a 128

GB RAID box for about 40 days of continuous waveform-data storage. This corresponds to con-

tinuous waveform data flow of over 4 Gigabytes per day uncompressed, which reduces to about 2

Gigabytes per day compressed. Our backup system is a Sun Ultra-60 with 768 MB of onboard

RAM and 36 GB of hard-disk storage. Each of these machines is sufficient to process our entire

network; however, for convenience we use several other Sun workstations and PC computers for

data collection and distribution, as described below.
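The quoted uncompressed data rate can be checked with back-of-envelope arithmetic; the 2-byte sample size below is our assumption for 16-bit digitizer output, and packet-header overhead is ignored.

```python
# Back-of-envelope check of the "over 4 Gigabytes per day uncompressed"
# figure quoted above, under our assumption of 16-bit samples.

CHANNELS = 256           # channels attached to the digitizing board
SAMPLE_RATE = 100        # samples per second per channel
BYTES_PER_SAMPLE = 2     # 16-bit integers (assumed)
SECONDS_PER_DAY = 86400

bytes_per_day = CHANNELS * SAMPLE_RATE * BYTES_PER_SAMPLE * SECONDS_PER_DAY
gb_per_day = bytes_per_day / 1e9
# gb_per_day comes out at roughly 4.4 GB/day, consistent with the text;
# 2:1 compression brings it near the quoted ~2 GB/day figure.
```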

                                 System Architecture
       We divide the system architecture into several separate categories: infrastructure, continu-

ous waveform data flow, parametric processing, the interactive lab software environment, and dis-

tribution of results. Infrastructure includes: packetized data, format conversions, relational

database storage and management of system configuration, quasi-static information such as sta-

tion location and response information, collected waveform data, and processing results.

       Management of an entire modular, near-real-time seismic system usually requires some

form of executive program. Antelope has a program called rtexec; Earthworm uses the startstop

program. The former allows turnkey restart of the system after computer crashes or reboots. We

run an embedded Earthworm inside the Antelope rtexec context: all Earthworm components are

off-the-shelf Earthworm modules though their starting, stopping, and restarting is controlled by

rtexec. A startstop program runs as a slave merely to create the Earthworm shared-memory rings

for Earthworm module communication.
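The turnkey-restart behavior described above amounts to a supervision loop over the configured modules; the sketch below is our own illustration of that pattern, not the implementation of rtexec or startstop.

```python
# Minimal sketch of the supervision pattern an executive program
# provides: a module is (re)started whenever it dies, giving turnkey
# recovery after crashes, with a budget to avoid endless restart storms.

def supervise(module, max_restarts=5):
    """Run `module` (a callable returning an exit code), restarting it
    on nonzero exit until it exits cleanly or the restart budget is
    spent. Returns the number of starts performed."""
    starts = 0
    while starts < max_restarts:
        starts += 1
        if module() == 0:   # clean exit: stop supervising
            break
    return starts
```

A module that crashes twice before coming up cleanly would thus be started three times.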

        Additional infrastructure components include a parameter-file mechanism; we use both

that of Antelope and that of Earthworm. The Antelope parameter-file mechanism differs from that

of Earthworm in providing much more functionality both in parameter file structure and support

for central master parameter files with local user modifications. Importantly for Iceworm develop-

ment, Antelope also provides a comprehensive set of programming libraries in multiple languages

for the creation and customization of seismic and real-time monitoring tools. In addition to the

Antelope command-line interface, which allows shell-scripting, languages supported by the Ante-

lope toolkit include C, Fortran, Perl, Tcl/Tk, and now MATLAB [Lindquist, 2000; MATLAB

itself is a product of The Math Works, Inc.]. We have used these features and languages to vastly

simplify the construction of specialized regional-network software needed at the AEIC.
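The master-parameter-file layering described above can be sketched as a recursive dictionary merge; this is our own illustration of the behavior, not Antelope's parameter-file format or API, and the parameter names are invented for the example.

```python
# Sketch of layered parameters: a central master file supplies defaults,
# which a local user file may override, with nested sections merged
# recursively rather than replaced wholesale.

def merge_params(master, local):
    """Return master parameters with local overrides applied."""
    merged = dict(master)
    for key, value in local.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_params(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical parameters: the local file retunes one picker setting
# while inheriting everything else from the master file.
master = {"sample_rate": 100, "picker": {"sta": 1.0, "lta": 8.0}}
local = {"picker": {"sta": 0.5}}
params = merge_params(master, local)
```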

       A core decision on Iceworm infrastructure has been to manage all input, output, and much

run-time configuration information with relational databases. While virtually any accumulation of

data can be considered a “database” in some sense, a relational database is organized on the

premise that each piece of information should be stored in only one place. Relational databases

allow data normalization, a structuring of the database to avoid update anomalies: single pieces of

information such as station locations or earthquake hypocenters can be updated without affecting

other parts of the database [Braithwaite, 1991]. Relationships between pieces of information are

specified explicitly such that pieces of information can be grouped in multiple, sensible ways to

suit the needs of the moment. For example, hypocenter locations at some times need to be associ-

ated with the phase-arrival information from which they are derived; at other times need to be

associated with one or several computed magnitudes or moment tensors; and at some times need

to be grouped into sets of competing hypotheses for the location of a single event. Similarly, seis-

mograph site information is associated at various times with information on the characteristics of

deployed instruments as they change through time, associations of sites with different networks,

and groupings of different sites considered together for various processing tasks. The efficiency

gained with relational database storage can drastically increase productivity for catalogs and

scientific results, especially with hundreds of earthquakes per week and gigabytes of waveforms per day.
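The groupings described above can be illustrated with toy CSS3.0-style rows: origins and arrivals are each stored once and joined on demand through the orid/arid association keys of an assoc table. The field values here are invented for the example.

```python
# Toy CSS3.0-style tables: each piece of information lives in one place,
# and the assoc table expresses the origin-arrival relationship.

origin = [{"orid": 1, "lat": 61.5, "lon": -149.9, "depth": 40.0}]
arrival = [{"arid": 10, "sta": "COLA", "phase": "P", "time": 100.0},
           {"arid": 11, "sta": "ADK",  "phase": "P", "time": 130.0}]
assoc = [{"orid": 1, "arid": 10}, {"orid": 1, "arid": 11}]

def arrivals_for(orid):
    """Group the phase arrivals associated with one hypocenter."""
    arids = {a["arid"] for a in assoc if a["orid"] == orid}
    return [a for a in arrival if a["arid"] in arids]
```

Updating an arrival time or relocating the origin touches only one row, while the same rows can later be regrouped for magnitudes or competing location hypotheses.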


       The output databases from the Iceworm system are in the standard Center for Seismic

Studies CSS3.0 schema [Anderson et al., 1990]. In order to accommodate the storage of informa-

tion internal to the Iceworm system, however, we have developed an expanded schema, based on

CSS3.0, which is used for Iceworm processing. No modifications have been made to the original

CSS3.0 tables; the changes are entirely additions and extensions. None of the added tables and

fields are intended to appear in the final output databases. They merely support the running auto-

matic system.

       Eight tables have been added to the CSS3.0 schema to form the Iceworm schema. The pins

table associates an integer pin-number, a “social-security number” for each data stream, with each

station and channel name. This is the same as the Earthworm pin number. An additional field in

this table describes whether to save the data stream for processing, useful for temporarily

turning off non-working stations. There is no historical tracking of pin numbers as their associations

change or as station and channel names and characteristics change. They are simply a transient

bookkeeping device for the currently running system. The timecorr table gives the amount of

communication delay (up to 0.27 seconds for a satellite hop) to be adjusted into the timestamps of

the raw data before they reach the trace-ring (i.e. the Earthworm shared-memory ring-buffer for

data communications). Four tables make possible the intake of data from various data sources.

The ewanalog table specifies the wiring of the Earthworm analog-to-digital converter. This table is

watched dynamically so that, as wiring changes are made in the lab, corresponding database changes

can re-assign station names in the digitization without shutting down the whole system. The win

table supports data exchange with Japan, associating the Japanese WIN-format station code [K.

Katsumata, pers. comm. 1996] for each station with the Alaskan station names. The tables reftek-

das and reftekchan associate Reftek datalogger serial numbers and channel numbers with the seis-

mograph station and channel names to which they correspond. Importantly, a picker table lists all

the parameters needed to custom-tune the phase picking for each station-channel. An additional

field indicates whether each station-channel should be picked or ignored. Finally, the eva table

gives parameters for the Alaskan Earthquake/Volcano Alarm (EVA). The EVA is a rough

measure of the energy content of a seismogram, used to set off alarms during potential large earth-

quakes and volcanic eruptions. The eva table specifies the stations and channels to watch, and the

parameters for triggering the alarm for each station or combination of stations.
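As an illustration of how the pins table is used, consider the following Python sketch. It is illustrative only: the field names, station names, and data structure are our own inventions for exposition, not the actual Iceworm schema definition.

```python
# Illustrative sketch only: how the pins table associates an Earthworm
# pin number with a station/channel pair and a save flag. Field and
# station names here are assumptions, not the actual Iceworm schema.
pins = [
    {"pinno": 1001, "sta": "MCK",  "chan": "SHZ", "save": True},
    {"pinno": 1002, "sta": "COLA", "chan": "BHZ", "save": True},
    {"pinno": 1003, "sta": "SSN",  "chan": "SHZ", "save": False},  # station down
]

def lookup(pinno):
    """Resolve a pin number to (station, channel), or None if unknown."""
    for row in pins:
        if row["pinno"] == pinno:
            return row["sta"], row["chan"]
    return None

def streams_to_process():
    """Only streams flagged for saving enter automatic processing."""
    return [(r["sta"], r["chan"]) for r in pins if r["save"]]
```

Because pin numbers carry no history, a lookup table of exactly this shape is all the running system needs.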

       Continuous waveform data flow encompasses diverse telemetry strategies, input formats,

and geographic distribution for data intake. Inside the system, continuous data must be available

both as continuous near-real-time streams and in small segments for recent time periods. Continu-

ous data also needs to be available for geographically diverse export, so our regional network can

participate in virtual seismograph networks [e.g. Harvey et al., 1998]. Finally, at least in our

regional network, continuous data needs to be archived for some length of time. This serves more

purpose than simple data storage. Reassembling data packets into seismologically useful waveform

segments can require extensive bookkeeping on the part of the program doing the reassembly. It

makes sense to do packet reassembly once rather than many times, via one piece of well-designed

archiving software, allowing all but the most time-critical continuous-data processing tasks to run

off the database rather than the near-real-time packet stream.

Regarding the availability of continuous data, both as near-real-time streams

and as segments of recent data, Earthworm has two completely separate mechanisms: the shared-

memory ring for streams of data, and the wave-server client-server protocol for recent segments

of data. In Antelope there is one integrated interface: both modes of data access are through the

orbserver program. For distributing streams of near-real-time waveforms, data packets are passed

around by Earthworm with three completely different mechanisms depending on distance and net-

work topology: for communication on the same machine, sharing of data packets is through the

shared-memory ring-buffer and Earthworm transport libraries. For communication amongst

machines in the same lab, data packets are converted to UDP packets and sent over local intranet

to reception and reconversion modules. For longer-distance internet communication, a third, com-

pletely different protocol is used, that of the import_generic/export_generic mechanism. In addi-

tion, data compression, if desired in this third mechanism, is implemented in Earthworm as a

separate set of message types and an additional tier of compression/decompression modules. In

Antelope, all of these data-communication problems are handled transparently by a single

mechanism. This has made the setup of our complex Alaskan data-flow system much more

intuitive and straightforward. Finally, the orbserver mechanisms are fully dynamic. All details of

the orb connection parameters are negotiated and handled silently by the orb libraries, whereas in

Earthworm many things such as heartbeat intervals must be set and coordinated by hand.

Although all of the above means of communication are available to the Iceworm system, and all in

fact are used in some parts of the system, the technical advantages of Antelope described above

have led us to choose Antelope ORB data communication, wherever possible, for all critical data paths

both inside and outside Iceworm. Nevertheless, we retain the complete ability to pass data back and forth

between Antelope and Earthworm.
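The contrast can be caricatured with a toy, orb-like server in Python. This is entirely hypothetical code of our own, not the Antelope orbserver API: the point is only that one object can answer both kinds of request, the next packet in a near-real-time stream and a segment of recent data, where Earthworm uses two separate mechanisms.

```python
# Hypothetical toy illustrating the single-mechanism idea (NOT the
# Antelope API): one object serves both streaming and segment access.
from collections import deque

class MiniOrb:
    def __init__(self, maxpackets=1000):
        # bounded ring buffer of (time, srcname, payload) packets
        self.ring = deque(maxlen=maxpackets)

    def put(self, time, srcname, payload):
        """Producer side: append a timestamped packet."""
        self.ring.append((time, srcname, payload))

    def reap(self, after_time=-1.0):
        """Streaming access: earliest packet newer than after_time."""
        for pkt in self.ring:
            if pkt[0] > after_time:
                return pkt
        return None

    def request(self, t0, t1):
        """Segment access: all buffered packets with t0 <= time <= t1."""
        return [pkt for pkt in self.ring if t0 <= pkt[0] <= t1]
```

A streaming client simply calls reap repeatedly with the time of the last packet it saw; a segment client calls request with a time window.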

         Our current processing system acquires continuous waveform data with a multitude of

technologies and packet formats [Figure 2]. As indicated in this figure, data from many formats

and diverse locations are integrated into centralized orb processing and can also be seamlessly

passed between the Antelope orb and Earthworm. The orb2eworm and eworm2orb conversion

utilities account for the difference in data-handling strategies between the two systems, converting

all data to the Earthworm common trace-buffer format when passing to Earthworm and, on the

Antelope side, using the Antelope packet library, which transparently handles diverse input packet formats.


         Several new programs have been written to support this data handling. These include

guralp2orb, rtp2orb, ida2orb, arrays2orb, win2orb, adsend2orb, orb2eworm, and eworm2orb.

Adsend2orb dynamically watches the site database for changes in analog station assignment,

allowing wiring changes to be made without stopping data for the entire network. Both

adsend2orb and eworm2orb have the ability to insert timestamp-corrections to adjust for satellite

or other communication delays. This supports our underlying decision to have all data correctly

timestamped before they reach the automatic processing and databasing levels. Guralp2orb com-

municates with the Guralp, Inc. SCREAM program, acquiring data over internet from Guralp dig-

itizers. Rtp2orb communicates via the Reftek Protocol with Reftek digitizers. Ida2orb

communicates with systems running the International Deployment of Accelerometers (IDA)

Near-Real-Time System (NRTS) [Berger and Chavez 1997]. Arrays2orb handles Alaskan array

data. Win2orb handles data in the Japanese WIN format. With the exception of the connections to

embedded Earthworm systems via adsend2orb, eworm2orb, and orb2eworm, all data is commu-

nicated amongst computers (i.e. from the data concentrators described below, to the processing

machines) with the Antelope orb2orb program.
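The timestamp-correction step can be sketched as follows. This is a minimal illustration of the idea, with illustrative table contents and function names of our own: the delay recorded for each station in the timecorr table is subtracted from the wall-clock time assigned at digitization, so that data carry true sample times before they reach processing.

```python
# Sketch of per-station timestamp correction for telemetry delay.
# A satellite hop adds up to ~0.27 s between sensor and digitizer, so
# the timestamp assigned at digitization is shifted earlier by that
# amount. Table contents and names here are illustrative assumptions.
timecorr = {"ADK": 0.27, "MCK": 0.0}  # station -> delay in seconds

def correct_timestamp(sta, digitized_time):
    """True sample time = digitization time minus telemetry delay."""
    return digitized_time - timecorr.get(sta, 0.0)
```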

       In addition to the multitude of input formats, we collect data from a geographically

extended region [Figure 3]. This includes an analog network that is concentrated in the interior

and south-central parts of Alaska, along the Alaska Peninsula, and on some of the Aleutian

Islands [see Figure 1]. These stations are all brought into the lab via analog telemetry and digi-

tized in Fairbanks, or brought into a concentration point in Anchorage where they are digitized

and sent to Fairbanks via the orb2orb program. The data-concentration point in Anchorage eases

logistics for stations nearer to Anchorage than to Fairbanks and reduces the cost of telemetry

by allowing shorter distances for leased phone lines. Data from several new Guralp broadband

stations installed as part of the Federal/State Tsunami Hazard Mitigation Initiative (the CREST

program), as well as through the Princeton Earth Physics Project, are digitized in the field, then

sent to Fairbanks via serial line or internet. The IRIS/GSN stations at College, Alaska (COLA)

and Adak Island (ADK) are brought into the lab with the Live Internet Seismic Server (LISS) pro-

tocol, using the liss2orb program. The Kodiak Island IRIS/IDA station (KDAK) is brought into

Fairbanks by a Near-Real-Time System (NRTS) link with the program ida2orb. Finally, data col-

lected through the Alaska Tsunami Warning Center in Palmer, Alaska are transmitted to Fair-

banks with the Earthworm import/export protocol over a leased-line intranet connection.

       Our Alaskan data-flow is expanded by our participation in several virtual seismograph net-

works [Figure 4]. These include data exchange with the Institute of Seismology and Volcanology

of Hokkaido University, Japan; import of western Canadian data and export of Eastern Alaskan

data with the Pacific Geoscience Center in Victoria, British Columbia; and exchange of seismic

and infrasonic data with the Infrasound Laboratory in Kona, Hawaii. A subset of our network data

are sent to colleagues at the University of California, San Diego, from whom we also obtain near-

real-time data from the Kyrghyz array for research purposes. All of these exchanges are accom-

plished with the orb2orb program. In addition, we submit data directly to Golden, Colorado with

a new program orb2vdl, which reads data off an orbserver and sends them in Virtual Data Logger

(VDL) format to the U.S. National Seismograph Network.

       That exchange of waveform data is augmented by our exchange of parametric data in a

variety of ways [Figure 5]. Reviewed hypocenters are received via email from the Alaska Tsu-

nami Warning Center and the National Earthquake Information Center (NEIC). Automatic hypo-

center solutions are also received from NEIC. A web-page link for Alaska is included for

participation in the Community Internet Intensity Project [Wald et al. 1999]. Automatic and

reviewed hypocentral data are distributed from the Alaska Earthquake Information Center via

email, as well as other alarm-event response mechanisms as discussed below.

       Our data collection and processing are spread across a variety of computers, including

three independent Sun workstations for an Operational system, a Backup system, and a Development

system; three Earthworm digitizers, one for operations, one for backup, and one at the Anchorage

data-collection node; and two Sun data-concentrator workstations plus a number of PCs for data

collection. For details of the computer setup, refer to Figure 6.

       All continuous data are stored via the Antelope orb2db program. This replaces our origi-

nal strategy, which relied on the ad_demux, wave_server and archiver modules for Earthworm.

(Of these, only the wave_server is now running, to support local-magnitude calculation.) Currently

we have configured orb2db to store the raw waveform data in miniSEED format [Ahern et

al., 1993], with summary information and access for those files in CSS3.0 wfdisc tables [Ander-

son et al., 1990]. Given our current disk-space, we have 40 days of continuous data online for our

operational system, four days on our backup system, and three on our development system.
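The relationship between disk capacity and days of data online is simple arithmetic, sketched below. The channel count, sample rate, sample size, and disk size are illustrative assumptions of ours, not the actual AEIC configuration, and miniSEED compression would stretch the real figure.

```python
# Back-of-envelope sketch: days of continuous data online as a
# function of disk capacity and uncompressed data volume. All
# parameter values below are illustrative assumptions.
def days_online(disk_gb, n_channels, samprate_hz, bytes_per_sample=4):
    daily_bytes = n_channels * samprate_hz * bytes_per_sample * 86400
    return disk_gb * 1e9 / daily_bytes

# e.g. 300 channels at 100 samples/s on a 400 GB volume:
print(round(days_online(400, 300, 100), 1))  # about 38.6 days
```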

       Finally, spread across this computer network is our processing system, the simplified

architecture of which is shown in Figure 7. For representative purposes only one orb is shown in

this figure; in practice, however, we run multiple orbs on several different machines for redundancy

and load-sharing, relying on the orb2orb program to manage data-flow amongst different orbs.

Automatic parametric processing includes phase detection, phase association, and magnitude esti-

mation. This processing is done along two parallel tracks. One is the cascade of Antelope near-

real-time earthquake detection, triggering, and location tools. The other, the stream we use for

analysis, consists of the Iceworm icepick program and the standard Earthworm binder_ew associ-

ator program. Icepick is essentially a version of the Rex Allen phase-picker [Allen 1978, 1982]

taken from Earthworm. The original Earthworm version [Dietz et al. 1993] ran on multiplexed

analog data. Icepick asynchronously processes all incoming channels, with relational database

specification of picking parameters that are individually tunable for each seismic channel. At

present we rely on the Earthworm picker and binder_ew hypocenters with which to build our

main automatic catalogs, both to maintain historical continuity and because of the suitability of

the Earthworm picker to much of our short-period data plus the satisfactory performance of the

Earthworm binder_ew program within our core network. Future efforts will blend results from

multiple associators, allowing comparison of results and creating an automatic catalog for analyst

review that catches as many Alaskan earthquakes as possible; however, this is a larger effort that

merits presentation in a separate work.
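For readers unfamiliar with pickers of this family, the fragment below caricatures the core detection idea with a plain short-term-average over long-term-average (STA/LTA) energy ratio. It is a didactic sketch only, not the Allen algorithm or the icepick implementation, which use a more elaborate characteristic function and pick-quality logic; the windows and threshold are arbitrary.

```python
# Didactic sketch only: a plain STA/LTA detector illustrating the kind
# of energy-ratio test at the heart of single-trace pickers. Not the
# Allen picker; windows and threshold here are arbitrary assumptions.
def sta_lta_pick(samples, dt, sta_win=1.0, lta_win=10.0, threshold=3.0):
    """Index of the first sample where the short-term average of squared
    amplitude exceeds `threshold` times the long-term average, or None."""
    nsta = int(sta_win / dt)
    nlta = int(lta_win / dt)
    energy = [s * s for s in samples]
    for i in range(nlta, len(samples)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        if lta > 0.0 and sta / lta > threshold:
            return i
    return None
```

Per-channel tuning of the equivalent windows and thresholds is exactly what the Iceworm picker table makes possible.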

       Two programs are responsible for saving the output of the picker and associator into rela-

tional databases of parametric data. The first, savepicks, runs on the side, merely to save all auto-

matic picks for later network tuning and review. The main module is called css_report. This

module saves all hypocenters created by binder_ew, along with their associated picks. As always

in near-real-time seismic event detection, there is a trade-off in reporting created by the partially

conflicting goals of wanting information on events as fast as possible, wanting the best possible

information at any given time, and wanting the final, conclusive output of the automatic system to

be as accurate as possible. The Earthworm binder_ew module continually updates hypocentral

solutions as new picks arrive that change the results. Our approach in css_report was to adopt the

strategy of the Earthworm report module, which was to wait to report any binder_ew solution

until it had stabilized for 60 seconds (actually a tunable parameter), but to make an immediate

report of anything that started off with more than 25 contributing picks. In addition to saving

hypocenters to a database, css_report can optionally report the database rows to an orbserver as well,

allowing many near-real-time processing tasks to be triggered with the well-engineered, event-

driven orb communications.
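The release policy just described can be sketched in a few lines. The function and argument names below are ours, not the actual module's; only the two decision rules come from the text.

```python
# Sketch of the css_report release policy: hold a continually-updating
# binder_ew solution until it has been stable for stable_s seconds, but
# release immediately if it first appeared with more than quick_picks
# contributing picks. Names are illustrative, not the real module's.
def should_report(now, last_update, npicks_at_first,
                  stable_s=60.0, quick_picks=25):
    if npicks_at_first > quick_picks:
        return True                        # big event: report immediately
    return now - last_update >= stable_s   # otherwise wait for stability
```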

       The css_report module interacts with a third new Iceworm module, called local_mag,

which automatically computes synthetic Wood-Anderson seismograms, deriving the local magni-

tude of Alaskan regional earthquakes.
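A local-magnitude computation of this general kind combines the peak amplitude on the synthetic Wood-Anderson record with a distance correction. The sketch below uses the Hutton and Boore (1987) southern-California calibration purely for illustration; it is not the calibration used in Alaska.

```python
# Sketch only: ML from peak synthetic Wood-Anderson amplitude plus a
# distance correction. The Hutton-Boore (1987) calibration below is an
# illustrative stand-in, not the Alaskan calibration used by local_mag.
import math

def local_magnitude(peak_amp_mm, hypo_dist_km):
    """ML from peak Wood-Anderson amplitude (mm) at hypocentral
    distance (km), Hutton-Boore form."""
    return (math.log10(peak_amp_mm)
            + 1.110 * math.log10(hypo_dist_km / 100.0)
            + 0.00189 * (hypo_dist_km - 100.0)
            + 3.0)
```

In this form a 1 mm Wood-Anderson amplitude at 100 km hypocentral distance corresponds to ML 3.0 by construction.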

       The interactive lab software environment around the Iceworm system provides a number

of capabilities. Our alarm-response and analyst-review of earthquakes is performed on the contin-

uous waveform data using the Antelope dbpick and dbloc2 programs. Processing off the continu-

ous data is convenient for studying late arriving phases as well as avoiding problems with

processing data that have been segmented according to an incorrect automatic location. Seg-

mented data are excerpted from the continuous data stream after analyst-review has taken place.

The convenience of relational-database storage has allowed us to keep 13 years of earthquake cat-

alogs with their segmented waveforms online and easily accessible (retroactive reformatting

efforts allowed us to produce a consistently-formatted joint catalog of pre-Iceworm and Iceworm-

era earthquake locations and waveforms). Using the Antelope analyst review tools together with

custom tools built on the Antelope programming libraries, we can produce catalogs directly from

analyst-reviewed Iceworm automatic catalogs. Finally, interactive computation and research with

incoming data and online catalogs is aided by a new Antelope Toolbox for Matlab [Lindquist,

2000]. Also, Antelope data-format conversion utilities allow processing with other scientific tool-

boxes, for example the Seismic Analysis Code (SAC) [Tapley and Tull 1990].

       Additional custom-built tools allow streamlined response to alarm events (large earth-

quakes and felt earthquakes). These software development efforts have been described in separate

work [Lindquist and Hansen 1999a], however a short overview is appropriate here. Automatically

located hypocenters are available both through continuously updating relational databases and as

packets distributed over internet via orb. A graphical utility called wormwatch connects to either

one of these sources to display the most recently located events. This utility shows a background

picture, a cloud-free satellite image of the state of Alaska, with earthquake locations and

their magnitudes superposed as small symbols. The utility is interactive, allowing on-duty ana-

lysts to bring individual earthquakes into the Antelope picking and location software. A suite of

C, Perl, and TCL/Tk programs developed in Alaska with the Antelope software development

libraries allows efficient collection of felt reports, followed by dissemination of information

releases via FAX, automatically customized telephone calldown lists, email, cell-phone email

notification, AEIC voice-mail message updates, and the world-wide-web. Also, the efforts we

have underway to bring array-processing online for expanded sensitivity and geographic coverage

of Alaskan and Arctic earthquake monitoring are described in separate work [Hansen and

Lindquist 1996, Lindquist et al. 1999b].

       Finally, we run a number of Earthworm support-utility functions such as the diskmgr and

statmgr for disk-space and module-status monitoring, and alarmmgr and mailfeeder for part of

our alarm and email distribution functions. An accompanying host of Antelope-based programs

performs database cleanup and dataflow-monitoring functions.

                          System Performance Results
We review the behavior of the Iceworm system over a two-year period in order to illus-

trate its performance relative to design goals. The issues addressed include completeness of the

automatically-compiled catalog as compared to the analyst-reviewed catalog; accuracy of auto-

matic hypocentral locations for hazard-response and for informing the public; and accuracy of

automatic magnitudes for hazard-response guidance and for informing the public.

       Between January 1, 1997 and December 31, 1998, the Alaska Earthquake Information

Center catalogued 11,231 earthquakes [Figure 8]. The event detection for these earthquakes was

provided by numerous inputs, predominantly the Earthworm associator binder_ew but also by the

National Earthquake Information Center and by a PC-based system [Lee, 1989]. As a retroactive

measure of the Earthworm associator performance, we compared the output of our Earthworm

operational system to the completed AEIC catalogs partially based on the Earthworm output. For

each AEIC catalog hypocenter, we looked for an event in the Iceworm catalog within 300 seconds

in origin time and 3 degrees in epicentral distance, discovering 8178 such matches, with residuals as

shown in Figure 9. These association criteria are deliberately larger than reasonable, allowing the

procedure to catch a few bad matches and possible misassociations for the sake of catching all real

associations and applying further subsetting.
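The matching test can be sketched as follows. Events here are reduced to (origin time, latitude, longitude) tuples, and the flat-earth epicentral distance is an approximation adequate for a sketch; this is our own illustration of the procedure, not the comparison code actually used.

```python
# Sketch of the catalog-matching test: an AEIC hypocenter matches an
# Iceworm hypocenter if origin times differ by no more than 300 s and
# epicenters by no more than 3 degrees. Flat-earth distance is an
# approximation adequate for illustration.
import math

def matches(ev1, ev2, dt_max=300.0, ddeg_max=3.0):
    t1, lat1, lon1 = ev1
    t2, lat2, lon2 = ev2
    if abs(t1 - t2) > dt_max:
        return False
    dlat = lat1 - lat2
    dlon = (lon1 - lon2) * math.cos(math.radians((lat1 + lat2) / 2.0))
    return math.hypot(dlat, dlon) <= ddeg_max

def count_matched(aeic, iceworm):
    """Number of AEIC events with at least one Iceworm counterpart."""
    return sum(any(matches(a, i) for i in iceworm) for a in aeic)
```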

        Choosing an origin-time-residual cutoff of 50 seconds and a somewhat arbitrary distance-

residual cutoff of 2.5 degrees leaves 7802 AEIC catalog hypocenters which were detected and

located by the Iceworm operational system. These deliberately lenient search criteria cast a wide

net, respecting the observed fact that binder_ew will occasionally detect an earthquake without

locating it well. The majority of these residuals are under 0.1 degrees in distance and 5 seconds in

origin time [Figure 10].

        Of the events missed by the operational (main) Iceworm system, 243 were located within

50 seconds/2.5 degrees by the backup Iceworm system, most probably during down-times of the

operational system. Of the remaining events, 398 were outside the Earthworm binder_ew

module’s location grid of 54° to 67° North and 163° to 130° West. This leaves 2788 events, or

25% of the total, during the two-year period which are in the AEIC catalog but for which Iceworm

produced no location believable even for triggering with an incorrect location (maximum residuals

of 50 seconds in origin time and 2.5 degrees in epicentral distance).
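The bookkeeping above can be checked directly, using only the counts quoted in the text:

```python
# Event bookkeeping for 1997-1998: events located by the operational
# system, by the backup system, and events outside the binder_ew grid
# are removed from the AEIC total, leaving the missed fraction.
total = 11231                 # AEIC catalog, 1997-1998
located_main = 7802           # matched by the operational system
located_backup = 243          # matched only by the backup system
outside_grid = 398            # beyond the binder_ew location grid
missed = total - located_main - located_backup - outside_grid
print(missed, round(100.0 * missed / total))  # 2788 25
```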

       For comparison, we have repeated the above analysis for the time-period of August, 2000

in order to see if the currently running system is missing as many earthquakes. For this period

there are 371 events in the AEIC catalog, of which 32 events are inside the binder_ew location

grid and have been missed by both operational and backup systems. In other words, the percent-

age of cataloged earthquakes that should have been located by Iceworm but weren’t has dropped

from 25% to 9%. We tentatively attribute this mainly to changes in which stations are being

picked. In truth, however, the comparison between 1997-1998 performance and August, 2000 per-

formance is clouded by the discontinued reliance on the PC-based backup triggering system,

replaced by hand-scanning of selected continuous records by analysts for missed earthquakes.

The improvement in performance is most likely real, however, since the number of detected

events per month has not declined between 1997 and August, 2000.

       Returning to the original two-year time-period of study, the magnitudes of 6188 events

calculated by both AEIC and Iceworm, for hypocenters located by both systems, are shown in the

scatter-plot in Figure 11. This figure plots the magnitude of each event as determined automatically

by Iceworm against the magnitude of the same event as determined by AEIC analysts. The bulk of

the earthquakes are clustered around the line of equal automatic and manual magnitudes, indicat-

ing acceptable performance of the automatic magnitude calculator for hazard-response purposes.

For earthquakes in the range of ML 0 to 2 the Iceworm automatic magnitude calculator, when it

errs, tends to err above the real magnitude, which we tentatively attribute to noise in the short-

period analog data.

          Distance, azimuth from a convenient reference point of station MCK (McKinley National

Park, in the interior of Alaska), and depth residuals are shown in Figure 12. This figure shows that

while Earthworm binder_ew solutions are often in the right ballpark, there is enough variation

that human analyst review is necessary before releasing our earthquake solutions to the general public.


          During the time period of study, the AEIC made 264 earthquake-information releases for

significant and/or felt events. Of these, 172 were located by the main Iceworm system and another

3 were located by the backup Iceworm system, leaving 89 events missed or mislocated. Fifty-four of the

missed events were outside the Earthworm binder_ew association grid. Of the remaining 35, Fig-

ure 13 shows that many are on the edges of the grid away from the dense parts of the Alaskan net-

work of stations. In sum, very few earthquakes have been missed, given the limitations of

binder_ew. Magnitude residuals for the alarm events are quite good, as shown in Figure 14. The

automatic magnitudes for alarm events are good enough to give a realistic general estimate to a

duty-person analyst of the severity of the earthquake.

Distance, azimuth from MCK, and depth residuals for the alarm events are shown in Figure 15.

Compared to the previous figure for the entirety of the Iceworm catalog, the Earthworm

binder_ew solutions for our alarm events show much lower scatter, making the automatic solu-

tions immediately useful to hazard-response personnel. Thus we allow those solutions to be

released, with disclaimers, to appropriate members of the seismological and emergency-response communities.


                         Conclusion: Future directions

Establishment of the Iceworm system has laid the foundation for continual improvement in near-

real-time seismology at the AEIC, governed by a steady evolution that provides continuity of ser-

vice for core functions. We have chosen as many pieces from available systems as possible and

brought them together with local developments to best address the needs of distributed regional

network monitoring in Alaska. Among the directions for evolution: first, we have prepared the

way for better monitoring of Aleutian seismicity through joint network and array processing tech-

niques, since the new framework will allow real-time beam-forming of data streams. Second,

work is underway to bring continuous data from the Anchorage strong-motion instruments into

the real-time processing system. In fact, through the strong-motion stations installed as part of the

CREST program, plus several limited-time experiments, we have already started an integrated

weak/strong motion relational database of archived waveform data. Third, expansion of the gener-

alized beam-forming phase-association techniques will allow, through international cooperation

and exchange of real-time parametric information, the study of seismicity of the entire Arctic

region. This initiative is called the COoperative Arctic Seismology Program, or COASP [Hansen

and Lindquist 1996]. Fourth, adding near-real-time moment-tensor estimation from Alaskan

broadband stations will provide more information for emergency response for earthquakes as well

as reducing false alarms in tsunami hazard mitigation. Fifth, we are systematically eliminating

single points of failure from the hardware and software system. Finally, since the phase-stacking

problem has no definitive solution yet, we plan a ‘big-hopper’ approach to the intake of automatic

location data from all associators and sources possible, so that we are assured of identifying and

locating as many Alaskan events as possible. All source code developed by the University of

Alaska for Iceworm is available through the world-wide-web address


                                 Acknowledgments
       No system of this size would be possible without the teamwork of the entire AEIC staff. In

particular we acknowledge the computer-system administration work of Mitch Robinson. Kevin

Engle provided valuable programming help on the wormwatch utility. Dan Quinlan and Danny Har-

vey of Boulder Real Time Technologies, and Frank Vernon have provided invaluable software and

network advice. We thank the USGS Earthworm team for their software contributions, and Dave

Oppenheimer of the USGS for providing an initial PC-based Earthworm system in 1994. David

Chavez collaborated on the ida2orb and icepick programs, Robert Banfill on the rtp2orb and

arrays2orb programs, and Murray McGowan of Guralp, Inc. on the guralp2orb program. Dave

Ketchum wrote the vast majority of the orb2vdl program. Funding for this work was provided by

the State of Alaska; NSF grant number EAR93-16337 for part of the database integration; and

IRIS sub-award number 218. Implementation of the automatic local-magnitude routines was

aided by Suzanne Floyd, working under the Research Experience for Undergraduates program,

NSF grant number EAR95-31601. Some figures in this paper were generated with Generic Map-

ping Tools [Wessel and Smith 1991]. Finally, the authors would like to acknowledge the valuable

feedback from many discussions with regional network colleagues over the past several years.

                                   References
       Ahern, T.K., R. Buland, and S. Halbert (1993). Standard for the Exchange of Earthquake

Data: Reference Manual, SEED Format Version 2.3, February, 1993. Seattle: Incorporated

Research Institutions for Seismology, 203pp.

       Al-Amri, M.S. and A.M. Al-Amri (1999). Configuration of the seismographic networks in

Saudi Arabia, Seis. Res. Lett. 70, 322-331.

       Allen, R.V. (1978). Automatic Earthquake Recognition and Timing from Single Traces,

Bull. Seis. Soc. Am. 68, 1521-1532.

       Allen, R. (1982). Automatic Phase Pickers: Their present use and future prospects, Bull.

Seis. Soc. Am. 72, S225-S242.

       Anderson, J., W.E. Farrell, K. Garcia, J. Given, and H. Swanger (1990). Center for Seis-

mic Studies Version 3 Database: Schema Reference Manual, Science Applications International

Corporation, Arlington, Virginia, Technical Report C90-01, 61pp.

       Anderson, M.P., M.R. Robinson, T.M. Jiang, G.H.C. Sonafrank, and P.L. Ward (1994).

Alaska Seismic Network Database Management and Analysis Software at the University of

Alaska Geophysical Institute, Seis. Res. Lett. 65, 51.

       Bache, T.C., S.R. Bratt, J. Wang, R.M. Fung, C. Kobryn, and J. Given (1990). The Intelli-

gent Monitoring System, Bull. Seis. Soc. Am. 80, 1833-1851.

       Benoit, J.P., G. Thompson, K. Lindquist, R. Hansen, and S.R. McNutt (1998). Near-real-

time WWW-based monitoring of Alaskan volcanoes: The Iceweb system, Eos Trans. Amer. Geo-

phys. U. 79, No. 45, F957.

       Berger, J., R.G. North, R.C. Goff, and M.A. Tiberio (1984). A Seismological Data Base

Management System, Bull. Seis. Soc. Am. 74, 1849-1862.

       Berger, J. and D. Chavez (1997). The IDA Near-real-time system, Seis. Res. Lett. 68, 223-


       Bittenbinder, A. (1994). Earthworm: A Modular Distributed Processing Approach to Seis-

mic Network Processing, Eos Trans. Amer. Geophys. U. 75, No. 44, 430.

       Bödvarsson, R., S. Th. Rögnvaldsson, S.S. Jakobsdóttir, R. Slunga, and R. Stefánsson

(1996). The SIL Data Acquisition and Monitoring System, Seis. Res. Lett. 67, 35-46.

       Bödvarsson, R., S. Th. Rögnvaldsson, R. Slunga, and E. Kjartansson (1999). The SIL data

acquisition system--at present and beyond year 2000, Phys. Earth Planet. Inter. 113, 89-101.

        BRTT (1998). Antelope Installation and Operations Manual: Documentation for Antelope

Environmental Monitoring Software, Software Release 4.1. Boulder: Boulder Real-Time Tech-

nologies, Inc., 38 pp.

        Braithwaite, K.S. (1991). Relational theory: Concepts and Application, San Diego: Aca-

demic Press, Inc., 261 pp.

        Dietz, L. D., W. Kohler, and W.L. Ellsworth (1993). A Real-time P-wave Picker (RTP) for

a Fast Earthquake Response System (Earthworm), Eos Trans. Amer. Geophys. U. 74, No. 43, 429.

        Dietz, L., W. Kohler, A. Bittenbinder, B. Bogaert, and B. Hirshhorn (1994). Larva: an

implementation of Earthworm on the USGS Northern California Seismic Network, Eos Trans.

Amer. Geophys. U. 75, No. 44, p. 430.

        Dietz, L., A. Bittenbinder, B. Bogaert, W. Kohler, and C. Johnson (1995). Automatic seis-

mic network processing with Earthworm, Eos Trans. Amer. Geophys. U. 76, No. 46, p. F395.

        Dollar, R.S. and A.W. Walter (1995). Hybrid Analog and Digital Realtime Earthquake

Analysis Using CUSP_RT in a High Speed Network, Eos Trans. Amer. Geophys. U. 76, No. 46, p.


       Gee, L.S., D.S. Neuhauser, D.S. Dreger, M.E. Pasyanos, R.A. Uhrhammer, and B.

Romanowicz (1996). Real-time seismology at UC Berkeley: The Rapid Earthquake Data Integra-

tion Project, Bull. Seis. Soc. Am. 86, 936-945.

       Given, D.D. (1994). ISAIAH: Information on Seismic Activity in A Hurry, Seis. Res. Lett.

65, p. 47.

        Hansen, R.A. and K.G. Lindquist (1996). Array Processing for a Cooperative Arctic Seis-

mology Program, Abstracts from the XXV General Assembly of the European Seismological Com-

mission, Sept. 9-14, 1996, Reykjavik, Iceland.

        Hansen, R.A., K.G. Lindquist, and M. Bifelt (1997). Near-real-time seismic data exchange

for education and research, Abstracts from the Ninth Annual IRIS Workshop, June 8-12, The IRIS


        Hansen, R. H. (2000). One Size Does Not Fit All, Seis. Res. Lett. 71, p. 3-5.

        Harvey, D. J., F. L. Vernon, R. Hansen, K. Lindquist, D. Quinlan, and M. Harkins (1998). Real-time integration of seismic data from the IRIS PASSCAL broadband array, regional seismic networks, and the global seismic network, Eos Trans. Amer. Geophys. U. 79, No. 45, p. F567.

        Harvey, D. J. (1999). Evolution of the Commercial ANTELOPE Software. http://

        Hauksson, E., P. Maechling, and H. Kanamori (1994). Real-time earthquake monitoring using Terrascope, Seis. Res. Lett. 65, p. 47.

        Havskov, J., L.B. Kvamme, R.A. Hansen, H. Bungum, and C.D. Lindholm (1992). The Northern Norway seismic network: design, operation, and results, Bull. Seis. Soc. Am. 82, 481-

        Johnson, C. E. (1979). I. CEDAR--An Approach to the Computer Automation of Short-Period Local Seismic Networks. II. Seismotectonics of the Imperial Valley of Southern California, Ph.D. Thesis, California Institute of Technology, 332 pp.

        Johnson, C.E., A. Bittenbinder, B. Bogaert, L. Dietz, and W. Kohler (1995). Earthworm: A Flexible Approach to Seismic Network Processing, IRIS Newsletter XIV, Number 2, 1-4.

        Kjartansson, E. (1996). Database for SIL Earthquake Data, Abstracts from the XXV General Assembly of the European Seismological Commission, Sept. 9-14, 1996, Reykjavik, Iceland.

        Kleinman, S., D. Shah, and B. Smaalders (1996). Programming with Threads, Mountain View: SunSoft Press, 534 pp.

        Lahr, J.C. (1989). HYPOELLIPSE/Version 2.0: A computer program for determining local earthquake hypocentral parameters, magnitude, and first motion pattern, U.S. Geological Survey Open File Report 89-116, 92 pp.

        Lee, W.H.K., ed. (1989). Toolbox for seismic data acquisition, processing, and analysis: IASPEI Software Library, Volume 1: Seismological Society of America, El Cerrito, 284 pp.

        Lindquist, K.G. and R.A. Hansen (1995). Relational Database Implementation for Near-real-time Earthquake Monitoring, Eos Trans. Amer. Geophys. U. 76, No. 46, F395.

        Lindquist, K.G., J.P. Benoit, and R.A. Hansen (1996). Advances to near-real-time spectral monitoring at volcanoes with an Iceworm-based implementation, Eos Trans. Amer. Geophys. U. 77, No. 46, F451.

        Lindquist, K.G. (1998). Seismic Array Processing and Computational Infrastructure for Improved Monitoring of Alaskan and Aleutian Seismicity and Volcanoes. Ph.D. Thesis: University of Alaska, Fairbanks, 272 pp.

        Lindquist, K.G. and R.A. Hansen (1998). Seismic network improvements in Alaska: broadband stations and digital telemetry, Eos Trans. Amer. Geophys. U. 79, No. 45, p. F567.

        Lindquist, K.G. and R.A. Hansen (1999a). Antelope-based response software for alarm events, in Abstracts from the Eleventh Annual IRIS Workshop, June 9-12, The IRIS Consortium.

        Lindquist, K.G., R.A. Hansen, and T. Fricke (1999b). Detection of Arctic seismicity missed by other regional and worldwide catalogs, Eos Trans. Amer. Geophys. U. 80, No. 46, p.

        Lindquist, K.G. and R.A. Hansen (2000). Overview of the Alaskan experience with multi-scale, multi-project, multi-sensor real-time geophysical data processing, Seis. Res. Lett. 71, 233.

        Lindquist, K.G. (2000). An Antelope Toolbox for Matlab, in preparation.

        Maechling, P., P. Small, K. Hafner, S. Rajeshuni, K. Hutton, J. Polet, E. Hauksson, D. Given, A. Walter, and L. Jones (2000). SCSN/TriNet Solutions to Common System Design Issues, Seis. Res. Lett. 71, p. 233.

        Malone, S. (1995). SUNWORM: The Pacific Northwest Seismograph Network Real-time Seismic Recording and Processing System, Eos Trans. Amer. Geophys. U. 76, p. F395.

        Malone, S. (1996). The Electronic Seismologist: "Near" Real-time Seismology, Seis. Res. Lett. 67, No. 6, 52.

        Malone, S. (1999). The Electronic Seismologist: Seismic Network Recording and Processing Systems I, Seis. Res. Lett. 70, 175-178.

        Nakamura, Y. (1994). Urgent Earthquake Detection and Alarm System (UrEDAS), Seis. Res. Lett. 65, p. 47.

        Nolet, G. (1995). PEPP - Princeton Earth Physics Project, IRIS 2000: Section II, Scientific Contributions, from the proposal A Science Facility for Studying the Dynamics of the Solid Earth, submitted to the National Science Foundation, The IRIS Consortium, 129 pp.

        Ottemüller, L. and J. Havskov (1998). User manual, Automatic Data Collection and Event Detection for Seismic Networks, Institute of Solid Earth Physics, University of Bergen, 30 pp.

        Ottemüller, L. and J. Havskov (1999). SeisNet: A general purpose virtual seismic network, Seis. Res. Lett. 70, 522-528.

        Quinlan, D. M. (1994). Datascope: A Relational Database System for Scientists, Eos Trans. Amer. Geophys. U. 75, No. 44, 431-432.

        Robinson, M.R., C.A. Rowe, G.H.C. Sonafrank, and J.D. Davies (1991). Xpick demonstration: seismic analysis software system at the University of Alaska Geophysical Institute, Seis. Res. Lett. 62, p. 23.

        Rowe, C.A. and R. Hansen (1996). Earthquakes in Alaska: February, 1995, Alaska State Seismologist's Report 95-01-02, 22 pp.

        Rowe, C.A., G.H.C. Sonafrank, and J.D. Davies (1991). Data analysis at the University of Alaska Geophysical Institute seismology laboratory, Seis. Res. Lett. 62, p. 23.

        Sonafrank, C., J. Power, G. March, and J. Davies (1991). Acquisition and automatic processing of seismic data at the Geophysical Institute, Seis. Res. Lett. 62, p. 23.

        Tapley, W.C. and J.E. Tull (1990). SAC Command Reference Manual, version 10.5d, August 31, 1990, Regents of the University of California.

        Townshend, J. (1996). Unique Borehole for Seismic Instrumentation, Abstracts from the 8th Annual IRIS Workshop, Semi-ah-moo, Washington, 127 pp.

        von Seggern, D. H., G.P. Biasi, and K.D. Smith (2000). Network Operations Transitions to Antelope at the Nevada Seismological Laboratory, Seis. Res. Lett. 71, 444-448.

        Wald, D.J., V. Quitoriano, L.A. Dengler, and J.W. Dewey (1999). Utilization of the Internet for Rapid Community Intensity Maps, Seis. Res. Lett. 70, 680-693.

        Ward, P.L. and M.P. Anderson (1994). Database for the Northern California Seismic Network Using SUDS and a Computer-Independent Windowing Toolkit, Seis. Res. Lett. 65, 50.

        Wessel, P. and W.H.F. Smith (1991). Free software helps map and display data, Eos Trans. Amer. Geophys. U. 72, 444-446.

        Wu, Y., T. Shin, C. Chen, Y. Tsai, W.H.K. Lee, and T.L. Teng (1997). Taiwan Rapid Earthquake Information Release System, Seis. Res. Lett. 68, 931-943.

Author Affiliation:
       Geophysical Institute
       University of Alaska, Fairbanks
       903 Koyukuk Drive
       Fairbanks, AK 99775 (K.G.L.) (R.A.H.)

                                     Figure Captions
Figure 1) This map shows the current seismograph network of the University of Alaska. Red triangles indicate short-period stations; blue triangles indicate broadband stations. Stations outside the Alaskan geographic boundaries are run by the Pacific Geoscience Center in Canada and shared with Alaska via an Antelope orb connection. 266 stations are shown, comprising 548 channels of data. The solid red lines indicate major faults; the solid yellow line is the trans-Alaska oil pipeline. White lines show the notable roads in Alaska.

Figure 2) Iceworm handles data from a multitude of input data formats and sources, seamlessly and transparently making them available to both Earthworm and Antelope processing modules, archiving, and data-distribution and display software.

Figure 3) This schematic diagram shows the flow of Alaskan continuous waveform data into the processing system in Fairbanks. Short-period data are brought via various analog telemetry strategies into Fairbanks or Anchorage, where they are digitized. Data in Anchorage are concentrated on a Sun Workstation and shipped via Antelope orb connection over Department of the Interior intranet to Fairbanks in near-real-time. Connections to the IRIS/GSN and IRIS/IDA stations COLA, ADK, and KDAK bring these near-real-time data into Iceworm. Fifteen stations are concentrated at the Alaska Tsunami Warning Center in Palmer, Alaska and sent via an Earthworm connection over leased-line intranet to Fairbanks. Broadband stations installed through the CREST Federal/State tsunami monitoring initiative are either sent via serial line into Fairbanks, or put on a PC running the Guralp SCREAM software at the field site and from there transmitted to Fairbanks via internet. Stations installed through the Princeton Earth Physics Project are similarly put on internet with SCREAM software at the individual high-schools, then transmitted continuously via internet in near-real-time to Fairbanks.

Figure 4) The Alaskan seismograph network participates in several Virtual Seismograph Networks (VSN) to enhance Alaskan earthquake monitoring and to share data with global networks. VSN connections include Hokkaido, Japan; the Pacific Geoscience Center in Victoria, British Columbia; the U.S. National Seismograph Network in Golden via the orb2vdl software; the University of California, San Diego; and the Infrasound Laboratory in Kona, Hawaii.

Figure 5) Parametric data exchange with the Alaskan seismic system includes hypocenter intake from the National Earthquake Information Center, from the Alaska Tsunami Warning Center, from the Canadian Pacific Geoscience Center, and from the Yellowknife array in Canada. Automatic and reviewed hypocenters from Iceworm and AEIC are distributed in various formats to multiple institutions. World-wide-web-based participation in the Community Internet Intensity Map (CIIM) project [Wald et al., 1999] is included.

Figure 6) An extensive computer-system layout accomplishes the several goals of load-sharing and elimination of single points of failure. In this figure, large square boxes indicate Sun Workstations and small square boxes are Intel-architecture Personal Computers used for Earthworm digitizers and for running Guralp SCREAM software. The collection of Alaskan high-schools running SCREAM for the Princeton Earth Physics Project and the joint Earthworm/NOAA system of the Alaska Tsunami Warning Center are shown as oval-cornered boxes. Solid arrows indicate the flow of near-real-time, continuous waveform data.

Figure 7) This figure shows an overview schematic of the near-real-time processing tasks in Iceworm. All continuous-waveform data flow along critical paths is handled via the Antelope orb mechanism. For illustrative purposes, only one orbserver is shown. An embedded Earthworm system runs automatic picking and association (earthquake location) programs. Custom-built local-magnitude calculators and database generators produce an automatic catalog of hypocenters in a relational database, which is then available for analyst review. Forty days of continuous waveform data are maintained online, stored with the Antelope orb2db utility. Data segmentation for individual events is accomplished after analyst review, with segmented-data waveforms stored online for 13 years of detected events. A parallel automatic processing system is run using the Antelope real-time detection and location tools, with the ultimate aim of integrating the output of the various component systems.

Figure 8) This figure shows the 11,231 earthquakes catalogued by the AEIC between 1997 and 1998, used for evaluation of Iceworm system performance as described in the text.

Figure 9) A scatter-plot of distance residuals vs. origin-time residuals (one asterisk per earthquake) between Iceworm automatic solutions and AEIC analyst-reviewed solutions allows analysis of Iceworm performance. Although explicit tracking of the change in hypocenter location from automatic solution to reviewed solution is possible, such tracking was not yet implemented for the time period of the study, requiring an after-the-fact association of the Iceworm catalog with the AEIC catalog. The plot is truncated at the outside-reasonable limits of 300 seconds and 3 degrees distance. This plot answers the question "For each AEIC hypocenter, what are the residuals to the closest Iceworm hypocenter?" If an Iceworm event has sufficiently small origin-time and distance residuals from an AEIC event, Iceworm is assumed to have succeeded in locating the AEIC catalog event.

Figure 10) The origin-time (figure A) and distance (figure B) residuals between Iceworm hypocenters and AEIC hypocenters shown in the previous figure are binned into histograms of 5-second width and 0.1-degree width, respectively. This plot indicates that the vast majority of Iceworm hypocenters are within 5 seconds of the analyst-reviewed origin time and within 0.1 degrees of the analyst-reviewed epicentral location.

Figure 11) In this figure, automatically generated Iceworm local magnitudes are plotted against AEIC analyst-reviewed local magnitudes, with one symbol for each earthquake. The solid line is a guide to the eye only: earthquakes along this line have perfect agreement between the automatic and analyst-reviewed magnitudes. Iceworm is biased towards overestimating magnitudes for smaller (ML<2) earthquakes, presumably due to the effects of telemetry noise on magnitude measurements; these events, however, are in a magnitude range that is less critical for hazard response than the larger earthquakes.

Figure 12) When Iceworm event locations are subtracted from AEIC event locations, the following scatter plots result. Each triangle represents a single earthquake. The horizontal axis is, for convenient display, the azimuth of the earthquake from the centrally-located station MCK in McKinley National Park, Alaska. The top plot shows the distance between the automatic and reviewed hypocenters in degrees. The middle plot shows the difference in azimuth in degrees from MCK for the two hypocenters, and the bottom plot shows the depth residual in kilometers. The point of this figure is to show that while many of the automatic hypocenters provide a good ballpark estimate of earthquake location, analyst review is still necessary before releasing these solutions to the general public.

Figure 13) This map shows alarm-event earthquakes that were inside the Earthworm binder_ew association grid but were nevertheless missed by binder_ew. Thirty-five events are plotted. The relatively few missed events indicate quite good performance of binder_ew within its association grid.

Figure 14) This scatter plot shows one symbol for each earthquake in the 1997-1998 time period for which an AEIC information release was issued. The horizontal axis is the analyst-reviewed local magnitude and the vertical axis is the automatically determined local magnitude. While the alarm-event magnitudes still need to be reviewed before release to the general public, they are good enough to give a general impression of earthquake size to duty personnel.

Figure 15) These three scatter plots, as in the previous figure for the entire catalog, illustrate the quality of agreement between automatic and analyst-reviewed hypocenter location estimates for alarm events on which AEIC information releases were issued between 1997 and 1998. The scatter in the residuals is much lower than that for the full catalog, illustrating the improved binder_ew locations when many picks are included, as is the case for larger events. This figure indicates that automatic hypocenters for alarm events are definitely accurate enough to be used for hazard-response purposes.

Figure 1 of Lindquist et al.
Figure 2 of Lindquist et al.
Figure 3 of Lindquist et al.
Figure 4 of Lindquist et al.
Figure 5 of Lindquist et al.
Figure 6 of Lindquist et al.
Figure 7 of Lindquist et al.
Figure 8 of Lindquist et al.
Figure 9 of Lindquist et al.


Figure 10 of Lindquist et al.
Figure 11 of Lindquist et al.
Figure 12 of Lindquist et al.
Figure 13 of Lindquist et al.
Figure 14 of Lindquist et al.
Figure 15 of Lindquist et al.
