NextGen IOC Sensor Assessment Report
Executive Summary
1.         Introduction
1.1        Context and Motivation
           NextGen
In 2002, the FAA published a Mission Need Statement (MNS #339) for Aviation Weather. This
document addresses “the overall need for and requirements of aviation weather information
and its delivery to users.” The needed capability is summarized by this quote from the
document:
The aviation weather mission need is to (1) detect and forecast operationally significant en
route and terminal weather events (in real time or near real time) on the surface and aloft that
affect the safety, orderliness, and efficiency of NAS operations and (2) disseminate the
information to the appropriate decision makers.
In 2003 Congress endorsed the NextGen concept in response to the realization that the current
system would not be able to meet growing air traffic demand and that a concerted effort was
needed to address this problem. Legislation established the Joint Planning and Development
Office (JPDO) to lead the planning for NextGen. The JPDO has published a Concept of
Operations for NextGen as envisioned in 2025 [1]. The JPDO also developed a set of operational
improvements needed to make the 2025 vision a reality, and has published the Weather
Concept of Operations [2], which expands on the vision for weather in 2025.

[1] NextGen Concept of Operations v2.0, 13 June 2007
[2] Weather Concept of Operations, Version 1.0, May 13, 2006
The goal of the Next Generation Air Transportation System (NextGen) is to significantly increase
the safety, security, capacity, efficiency, and environmental compatibility of air transportation
operations by 2025. A cardinal principle is to provide decision support tools (DSTs) with
information designed for specific NextGen operational decisions. The weather information
feeding such DSTs must be suitable in terms of content, specification, accuracy, spatial and
temporal resolution, consistency, refresh rate, and availability. It must be usable directly by
these tools, in the sense that the formats and access methods are machine interpretable, and
multiple users collaborating on decisions can benefit from having the same "common weather
picture." By assimilating weather into decision making, weather information becomes an
enabler for optimizing NextGen operations.
           4-D Weather Data Cube
 Under the NextGen Network Enabled Weather (NNEW) program, the improved weather observations, model outputs, analyses, and forecasts will
reside in a virtual repository known as the 4-D Weather Data Cube, which contains all
unclassified weather information used directly or indirectly to make aviation decisions. The 4-D
Weather Data Cube does two things. First, it provides complete and efficient access to the
weather information (observations, analyses, and forecasts) required by decision makers in the
NAS. Second, it provides complete and efficient access to the observations required by weather
information producers to make those analyses and forecasts. Selected weather data from the
4-D Weather Data Cube will be merged and processed to provide a consistent common
weather picture to support ATM decision making, known as the 4-D Weather Single
Authoritative Source (SAS). Thus, for a given point in space and time, there will be only one
observation or forecast used for ATM decision making.
The SAS is to be a key element of the NextGen 4-D Weather Data Cube. The SAS will manage an
integrated, distributed virtual database of local, regional, nationwide and global weather
information from many NAS and non-NAS sources. As a virtual database, the SAS neither
performs calculations on its data, nor produces derived data products. The SAS will integrate a
variety of observations at various spatial and temporal resolutions into a ‘modeling capability’
that will ‘de-conflict’ overlapping observations and conflicting forecasts and provide a baseline
set of gridded weather fields accessible to all Air Traffic Management decision makers for a
‘common weather picture’ across the entire NAS. SAS weather information will ultimately help
to determine safe and efficient flight clearances based on the performance parameters of the
individual aircraft, their pilots' proficiency, and the aircraft's trajectory-based intended path as
modified to avoid current and developing weather hazards.
       Weather observation and forecast requirements for meeting NextGen goals
For the Reduce Weather Impact (RWI) solution set, improved weather observational information
will be required for NextGen users to accurately and quickly assess the state of the atmosphere
and to support accurate forecasts of future weather impacting NAS operations. The goal is to
significantly improve detection of aviation-impact weather (e.g., turbulence and icing) and to
right-size sensor configurations and ground infrastructure. Optimized, externally controllable
and configurable ground-based, airborne, and satellite atmospheric-sensing networks that
provide higher-resolution weather observations will be developed to directly detect aviation
safety hazards and to support more accurate weather forecasts. A focus area will be to
consolidate and replace radars, as well as to improve weather radar technologies.
The second improvement area under RWI, NextGen Weather Forecasting improvements, will
build on the improved observations to provide better analyses and forecasts of aviation-
relevant weather phenomena. Improved analysis/forecast information will allow users to safely
plan and conduct 4-D, gate-to-gate, trajectory-based operations that avoid aviation relevant
hazards and meet operational user business rules. Enhanced weather forecasts will be
developed for icing, turbulence, wind shear, ceiling and visibility, volcanic ash dispersion, and
space weather. The integration of these forecasts into a consistent common weather picture
will also be used to develop:


           The environmental information needed to reduce noise propagation, dispersion of
            aerosols, and exhaust impacts
           The information regarding the transport and decay of wake vortices needed to
            reduce aircraft separation
           The space weather forecast information needed to mitigate the harmful effects of
            solar radiation on the health of flight crews and passengers while minimizing
            impacts on communications, navigation, and other NextGen systems
Improvements will also include new and enhanced quality assessment techniques for forecast
product accuracy, weather-specific and higher resolution numerical models to support
diagnostic and probabilistic forecast processes, as well as establishing and monitoring metrics
to evaluate the progress of NextGen.
The third improvement area under RWI, weather integration support for decision support, will
utilize the observational, forecast (including probabilities), and network capabilities to provide
decision makers appropriate weather information. The management of air traffic, especially
within the context of NextGen operations, is very complicated. The widespread use of
computers and sophisticated management tools is required if the goals of NextGen are to be
met. To ensure weather information is an enabler for optimizing NextGen operations, weather
information will be translated into information that is directly relevant to NextGen users and
service providers through their DSTs, such as the likelihood of flight deviation, airspace
permeability, and capacity. Rather than a wide variety of human decision makers using a small
number of aviation weather products (i.e., deterministic text and graphics), weather
information (i.e., probabilistic and digital) will be shaped to fit the DSTs that optimize NAS
efficiency and manage flight operations, separation, capacity, trajectories, and airport
operations. It will also mean that all decision makers and DSTs will access
the same consistent weather information. DSTs will subscribe to data from the 4-D Weather Data Cube to
incorporate weather data and bypass the need for human interpretation, allowing decision
makers to determine the best response to weather’s potential operational effects (both tactical
and strategic) and minimizing the effects of weather on NextGen operations.
1.2     RWI Sensor Right-Sizing Program Goals
The FAA has identified four improvement areas with regard to weather data: observations,
forecasts, integration, and dissemination. Figure 2 depicts the allocation of these improvement
areas to NNEW and RWI. Under RWI, the density and quality of atmospheric weather
observations will be improved by optimizing and developing atmospheric-sensing networks.
Improved weather observations will, in turn, enable improved weather modeling, analysis, and
forecasting.
[Figure 2. Allocation of JPDO weather operational improvements (OIs), which reduce weather impact to improve NAS performance, to the two programs: provide improved access to weather information by all users (Dissemination thrust, NNEW); improve weather observations (Observation thrust, RWI); improve quality of current and forecast weather information (Forecast thrust, RWI); integrate weather location, severity, and impact information into operational decision making (Integration thrust, RWI).]

        Assessment of sensor network for meeting NextGen weather observation requirements
The first step in the acquisition, processing, and dissemination of weather data is the actual
sensing of the state of the atmosphere. This sensing is performed by a broad network of
sensors owned and managed by a wide variety of governmental agencies, both federal and
state. This weather information is used in two basic ways: first, as input to forecast systems and
processes that project the future state of the atmosphere based on its current state; second,
for direct dissemination of weather data and observed phenomena to stakeholders such as
pilots and ATC personnel to enhance the safety and efficiency of flight operations.
The sensor network encompasses a multitude of different technologies, including:
        o Satellites: geosynchronous and polar orbiters
        o Radars, primarily NEXRAD and TDWR
        o Surface sensors
                 ASOS
                 AWOS
                 RVR
                 LLWAS
                 AWS
                 SAWS


        Identification of gaps based on functional and performance requirements
The functional requirement set formed the basis and framework of the IOC assessment activity.
Because the functional requirements inform the scope of the information required by the
NextGen architecture, and because they do not provide the hard values that could color the
results of the activity, they were the ideal starting point for the assessment. Although the
original organization of the functional requirements was not optimal, it was maintained in order
to preserve traceability between the functional requirements and any future sets of
performance requirements. During the assessment activity the various teams reviewed the
complete list of functional requirements and selected those requirements that fell within
their areas of expertise.
The portfolio and performance requirements played very little role in the early sensor
assessment activities. The sensor network’s IOC capabilities were assessed in the areas
described by the functional requirements. The main activity involving the performance
requirements was to provide a mapping of the portfolio requirements to the functional
requirements. This activity will also be done for the performance requirements when they
become available. Now that we have a mapping of detailed requirements to the detailed
performance assessments, the process of locating and analyzing gaps can proceed. Once the
gaps between required performance and current performance are identified, we can begin to
plan for mitigating and eliminating those gaps.
         Development of master plan for satisfying NextGen weather observation
          requirements
         Multi-year ongoing effort
1.3       Scope of this Report (FY 2009)
         Assessment of sensor network for IOC
The sensor IOC assessment matrix was hosted in two different environments during the course
of the activity: as an Excel spreadsheet and as a web form whose layout mirrored that of the
original Excel document. The matrix was also designed to be compatible with the work done by
NOAA on the NOSA database surveys. The spreadsheet lists all of the 309 functional
requirements identified by the requirements group, and allows the observing
systems'/platforms' characteristics that satisfy those requirements to be recorded and
compared. The recorded characteristics were chosen by the RightSizing team and
consolidated as a list that included parameters such as sensor names, operators, accuracies,
and horizontal and vertical coverages. To populate the matrix, each team provided inputs to
each functional requirement for platforms and sensors within their field of expertise. In
addition to the information provided by the teams, certain reference materials were also
included.
         Preliminary identification of observational gaps based on functional requirements
2.        Program Management and Schedule
2.1       Timeline and Deliverables
         FY09
The schedule for the RightSizing program for FY09 is shown in Figure 2.1. The program
milestones, activities, and deliverables are shown on the schedule. Although the timeline was
limited for such a comprehensive review and further shortened by program funding delays, the
goals and schedule of the project were met, and deliverables of superior quality and value were
produced.




Figure 2.1 FY09 Schedule
       FY10 and beyond (I will add editorial content to explain the nature of the tasks herein)
2.2       Team Organization and Responsibilities
         FY09 team (What I would like here is a brief bio (a couple of lines) of each team
          member, of course stressing their quals relative to the RightSizing Program, I will allow
          all of you provide these yourselves, I could just make them up myself, but , well you
          can imagine.)
           -   FAA Points of Contact (POCs)
           o Victor Passetti
           o Tammy Farrar
           o Dino Rovito
           o Mike Richards
           o Frank Law
           o Ernest Sessa
    NCAR
    The NCAR Research Applications Laboratory (RAL) has an extensive research program focused
    on aviation (approximately $11M/year supported by FAA, NASA, NWS, and international
    sponsors). In particular, people working under this program have expertise in the areas of
    convective and winter storms, ceiling and visibility, turbulence, in-flight icing, ground
    deicing, and wind shear. Moreover, significant efforts are underway that address data
    visualization and dissemination, and integration with decision support tools. All these research
    and development efforts are geared toward satisfying requirements for network enabled
    weather and weather forecasting capabilities for the NextGen.
                       Mathias Steiner       msteiner@ucar.edu      303-497-2720   Team Lead
                       Paul Herzegh          herzegh@ucar.edu       303-497-2820   ground, ceiling & visibility
                       David Johnson         djohnson@ucar.edu      303-497-8370   satellite, lightning, wind shear detection
                       Roy Rasmussen         rasmus@ucar.edu        303-497-8430   ground, LWE
                       Larry Cornman         cornman@ucar.edu       303-497-8439   in situ, turbulence
                       Michael Wiltberger    wiltbemj@ucar.edu      303-497-1532   space weather


    MIT/LL
    MIT Lincoln Laboratory has played a key role in the development of weather radar systems,
    particularly with respect to aviation needs. Among these systems are the TDWR and ASR-9
    WSP. It has also developed weather products for the FAA based on other sensors such as the
    NEXRAD and Doppler lidar. Lincoln Laboratory is currently working on an advanced radar that
    will be capable of performing aircraft and weather surveillance simultaneously, the
    Multifunction Phased Array Radar (MPAR). Weather data integration and decision support
    systems for aviation are also a strong focus at the Lab, both at the terminal (ITWS) and national
    (CIWS) levels. Sensor network coverage and cost-benefit analyses have been a part of Lincoln
    Laboratory's effort for the FAA as well. All of this expertise and infrastructure will be leveraged
    to provide radar and lidar coverage and performance characterization, as well as gap analysis
    and mitigation planning, for the RWI Sensor Right-Sizing program.
                     Dr. John Cho         JYNC@LL.MIT.edu      781-981-5335
                     Dr. Suilou Huang     suilou@ll.mit.edu    781-981-2172   radar / lidar; fax 781-981-0632


OU and IU
                    Dr. Kelvin Droegemeier   kkd@ou.edu                405-325-6561
                    Andrew Reader            areader@ou.edu            405-325-1869
                    Jerry Brotzge            Jbrotzge@ou.edu           405-325-5571
                    Fred Carr                fcarr@ou.edu              405-325-6561
                    Chris Fiebrich           ChrisFiebrich@ou.edu      405-325-6877
                    Dr. Beth Plale           plale@cs.indiana.edu      812-855-4373
                    Suresh Marru             smarru@cs.indiana.edu     812-855-4081
                    Scott Jensen             scjensen@cs.indiana.edu   812-855-9761


         Expanded team membership for FY10 and beyond
          o ESRL access to MADIS
          o NSSL access to NMQ
2.3       Leveraging and Collaborations/Interactions
         NOSA assessment and database
The NOSA database was used as an important source of information throughout the sensor IOC
assessment. Many of the sensor performance parameters were chosen to closely reflect those
parameters included in the NOSA platform surveys. NOSA survey information was included for
each NOSA parameter name that matched or closely matched the phenomena referenced in the
functional requirements. This information was included in the spreadsheet and properly
attributed to NOSA as the source of the data. The data can then be assessed and compared to
the information provided by the RightSizing team.
During 2002 Vice Admiral Conrad C. Lautenbacher, Jr., USN (Ret.) called for a fundamental
review of NOAA's strengths and opportunities for improvement. A Program Review Team
reviewed and debated issues and developed suggestions for building a better NOAA. These
suggestions led to 68 specific recommendations.
Recommendation 32 addressed centrally planning and integrating NOAA observing systems and
indicated a clear need for a NOAA-wide observing system architecture. The NOAA
Administrator responded:
I concur with the PRT recommendation that NOAA centrally plan and integrate all observing
systems. I will assign this responsibility to a matrix management team, with NESDIS providing
the program manager. I do not currently endorse the PRT recommendation to assign acquisition
authority for all observing systems to NESDIS.
NESDIS should lead a cross-cut team to develop an observational architecture commencing
immediately. This should capitalize on on-going efforts (e.g., coastal observations). This
architecture should capture the state today as well as the future state (e.g., 10 to 20 years).
With this architecture, NOAA would be able to assess current capabilities and identify short-
term actions.
A cross-cutting team led by NESDIS should conduct a systemic review of all other observing
systems. The following factors should be considered for observing systems to determine the
desirability of consolidating them:
          The required characteristics of the system (i.e., reliability, performance,
           maintainability)
          The number of and types of users of the system
          The estimated value of the capital asset and its recurring maintenance cost
NOAA can manage its observation system more efficiently and effectively with an architecture that
defines a consistent set of principles, policies, and standards. The NOAA Observing System
Architecture (NOSA) Action Group, directed by the NOSA Senior Steering Group, was
established to develop an observational architecture that helps NOAA:
          design observing systems that support NOAA's mission and provide maximum value,
          avoid duplication of existing systems, and
          operate efficiently and in a cost-effective manner.
NOSA includes:
          NOAA's observing systems (and others) required to support NOAA's mission,
          the relationship among observing systems, including how they contribute to support NOAA's mission and associated observing requirements, and
          the guidelines governing the design of a target architecture and the evolution toward this target architecture.
The RightSizing Sensor Assessment activity has made extensive use of this valuable resource in
developing the program's deliverables, especially the Sensor IOC Spreadsheet. Queries were
developed that retrieved relevant information from the NOSA database for many of the
Functional Performance Requirements and returned it in a format that facilitated ingest into
the RightSizing tools. All in all, several hundred entries were made using information from
NOSA. The quality of the information was generally quite good, although there were some
instances and data fields where the information returned was anomalous. Our general
approach in these cases was to leave the entries intact as they were returned. In questionable
cases, additional entries were created correcting or clarifying the fields in question.
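To make the query-and-attribution workflow above more concrete, the following is a minimal sketch assuming a hypothetical relational export of the NOSA survey data. The table name, column names, and file name are placeholders rather than the actual NOSA schema; the output labels mirror the assessment-matrix columns listed in Chapter 3.

```python
# Illustrative sketch only: the real NOSA database schema and access method are not
# reproduced here. The table and column names below (nosa_platform_survey,
# parameter_name, platform, accuracy, coverage) are hypothetical placeholders.
import sqlite3

def nosa_rows_for_requirement(conn, phenomenon):
    """Return NOSA survey rows whose parameter name matches a requirement's phenomenon."""
    cur = conn.execute(
        "SELECT platform, parameter_name, accuracy, coverage "
        "FROM nosa_platform_survey WHERE lower(parameter_name) LIKE ?",
        (f"%{phenomenon.lower()}%",),
    )
    # Tag every retrieved entry so the spreadsheet attributes NOSA as the source,
    # mirroring the attribution practice described in the text.
    return [
        {"Source": "NOSA",
         "Measurement System/Platform": platform,
         "Environmental Parameter Name": name,
         "Representative Measurement Accuracy": acc,
         "Geographic Coverage": cov}
        for platform, name, acc, cov in cur.fetchall()
    ]

if __name__ == "__main__":
    conn = sqlite3.connect("nosa_export.db")  # hypothetical local export of the NOSA survey data
    for phenomenon in ("visibility", "wind shear", "icing"):
        for row in nosa_rows_for_requirement(conn, phenomenon):
            print(phenomenon, "->", row)
```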
       NSF Facilities Assessment Database
The National Science Foundation (NSF) Lower Atmospheric Observing Facilities Program Office
and the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) are
conducting a community-wide assessment of atmospheric science related instrumentation. The
assessment considers facilities across government agencies, universities, national laboratories,
international organizations, and private companies, and covers a wide breadth of technologies,
including currently available instrumentation as well as systems under development. The
resulting database is currently being configured from the collected information on observing
facilities, platforms, and instruments.
This tool is focused on the performance and operating details of a wide range of mesonet-type
systems spanning the entire US. Since this information is organized on a network-performance
basis rather than a sensor-performance basis, information from this tool was not incorporated
into the initial incarnation of the Sensor Assessment Deliverable. Moving forward, it is
anticipated that this tool will provide valuable information for determining geospatial
parameter coverages and for providing insights into gap-filling strategies.
       NRC “Network of Networks” report
The Committee envisions a distributed adaptive “network of networks” serving multiple
environmental applications near the Earth’s surface. Jointly provided and used by government,
industry, and the public, such observations are essential to enable the vital services and
facilities associated with health, safety, and the economic well-being of our nation.
In considering its vision, practical considerations weighed heavily on the Committee’s
deliberations and in the formulation of its recommendations. To that end, the study
emphasizes societal applications and related factors influencing the implementation of an
enhanced observing system, the intent of which is to markedly improve weather-related
services and decision making. The Committee considered the various roles to be played by
federal, state, and local governments, and by commercial entities. In essence, the study
provides a framework and recommendations to engage the full range of providers for weather,
climate, and related environmentally sensitive information, while enabling users of this
information to employ an integrated national observation network effectively and efficiently in
their specific applications.
This study does not attempt to compile an exhaustive catalogue of mesoscale observational
assets, although it identifies and summarizes numerous important sources for such
information. Nor does this study attempt to design a national network, although it does identify
critical system attributes and the ingredients deemed essential to retain sustained importance
and relevance to users.
The RightSizing team remained cognizant of the AMS's activities in support of this report and of
seeing its recommendations come to fruition. The entire team attended the AMS summer meeting in
Norman, OK in support of these efforts. It is anticipated that the goals of future RightSizing
efforts will remain compatible and synergistic with the goals of the report.
       OFCM
The Office of the Federal Coordinator for Meteorological Services and Supporting Research,
more briefly known as the Office of the Federal Coordinator for Meteorology (OFCM), is an
interdepartmental office established because Congress and the Executive Office of the
President recognized the importance of full coordination of federal meteorological activities.
The Department of Commerce formed the OFCM in 1964 in response to Public Law 87-843.
Samuel P. Williamson is the Federal Coordinator. Their mission is to ensure the effective use of
federal meteorological resources by leading the systematic coordination of operational weather
requirements and services, and supporting research, among the federal agencies. Victor
Passetti and Thomas Carty are members of the OFCM and have briefed the activities, plans and
intent of the RightSizing Program and the Sensor Assessment initiative to the group on multiple
occasions.
       NWS
I am not sure what to say about this one.
       Broader community efforts (e.g., MADIS, Clarus, etc)
One of the first tasks of the RightSizing program was to establish a connection to the MADIS
data distribution network and to make these data available for study. The MADIS networks
represent a stable, reliable, and quality-checked source of a wide variety of nontraditional data
sources and systems. The RightSizing program will leverage this data source to study the value
of incorporating these types of data networks in the NextGen environment.
The Meteorological Assimilation Data Ingest System (MADIS) is dedicated to making value-
added data available from the National Oceanic and Atmospheric Administration's (NOAA)
Earth System Research Laboratory (ESRL) Global Systems Division (GSD) (formerly the Forecast
Systems Laboratory (FSL)) for the purpose of improving weather forecasting, by providing
support for data assimilation, numerical weather prediction, and other hydrometeorological
applications.
MADIS subscribers have access to an integrated, reliable, and easy-to-use database containing
the real-time and archived observational datasets described below. Also available are real-time
gridded surface analyses that assimilate all of the MADIS surface datasets (including the highly-
dense integrated mesonet data). The grids are produced by the Rapid Update Cycle (RUC)
Surface Assimilation System (RSAS) running at ESRL/GSD on a 15-km grid stretching from Alaska
in the north to Central America in the south and also covering significant oceanic areas. The
RSAS grids are valid at the top of each hour, and are updated every 15 minutes.
The ESRL/GSD database is available via ftp, by using Unidata's Local Data Manager (LDM)
software, and through the use of [?].
Quality Control (QC) of MADIS observations is also provided, since considerable evidence exists
that the retention of erroneous data, or the rejection of too many good data, can substantially
distort forecast products. Observations in the ESRL/GSD database are stored with a series of
flags indicating the quality of the observation from a variety of perspectives (e.g. temporal
consistency and spatial consistency), or more precisely, a series of flags indicating the results of
various QC checks. Users of MADIS can then inspect the flags and decide whether or not to
ingest the observation.
MADIS also includes an Application Program Interface (API) that provides users with easy access
to the observations and quality control information. The API allows each user to specify station
and observation types, as well as QC choices, and domain and time boundaries. Many of the
implementation details that arise in data ingest programs are automatically performed. Users
of the MADIS API, for example, can choose to have their wind data automatically rotated to a
specified grid projection, and/or choose to have mandatory and significant levels from
radiosonde data interleaved, sorted by descending pressure, and corrected for hydrostatic
consistency. The API is designed so that the underlying format of the database is completely
invisible to the user, a design that also allows it to be easily extended to non-ESRL/GSD
databases.
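The sketch below illustrates, in Python, the kind of selection and QC behavior described for the MADIS API (station and observation types, domain and time boundaries, and QC choices). It does not reproduce the actual MADIS API; every function, class, and field name here is invented for illustration.

```python
# Hypothetical illustration of the selection/QC behavior described above.
# None of these names belong to the real MADIS API; they are placeholders.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Observation:
    station_type: str      # e.g. "mesonet", "metar", "raob"
    variable: str          # e.g. "temperature", "wind_speed"
    lat: float
    lon: float
    valid_time: datetime
    value: float
    qc_flags: dict         # results of individual QC checks, e.g. {"temporal": True, "spatial": False}

def select_observations(obs_list, station_types, bbox, start, end, require_all_qc=True):
    """Return observations of the requested station types inside a lat/lon box and time
    window, keeping only those that passed the QC checks the caller cares about."""
    lat_min, lat_max, lon_min, lon_max = bbox
    selected = []
    for ob in obs_list:
        if ob.station_type not in station_types:
            continue
        if not (lat_min <= ob.lat <= lat_max and lon_min <= ob.lon <= lon_max):
            continue
        if not (start <= ob.valid_time <= end):
            continue
        # Emulate "inspect the flags and decide whether or not to ingest the observation".
        qc_ok = all(ob.qc_flags.values()) if require_all_qc else any(ob.qc_flags.values())
        if qc_ok:
            selected.append(ob)
    return selected

# Example: mesonet observations over a sub-domain for the last hour, all QC checks passed.
now = datetime.utcnow()
recent = select_observations(
    obs_list=[],                         # would be filled by an ingest step
    station_types={"mesonet"},
    bbox=(35.0, 45.0, -110.0, -90.0),
    start=now - timedelta(hours=1),
    end=now,
)
```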
The RightSizing effort also included coordination with the DOT's Clarus Initiative. The Clarus
program manager briefed the team on the Initiative's progress and capabilities, and a
connection was established with some of the Initiative's data sources.
The U.S. Department of Transportation (DOT) Federal Highway Administration (FHWA) Road
Weather Management Program, in conjunction with the Intelligent Transportation Systems
(ITS) Joint Program Office established the Clarus Initiative in 2004 to reduce the impact of
adverse weather conditions on surface transportation users.
Clarus is a research and development initiative to demonstrate and evaluate the value of
“Anytime, Anywhere Road Weather Information” that is provided by both public agencies and
the private weather enterprise to the breadth of transportation users and operators. The goal
of the initiative is to create a robust data assimilation, quality checking, and data dissemination
system that can provide near real-time atmospheric and pavement observations from the
states' collective investments in road weather information systems and environmental sensor
stations (ESS), as well as mobile observations from Automated Vehicle Location (AVL) equipped
trucks and, eventually, passenger vehicles equipped with transceivers that will participate in the
Vehicle Infrastructure Integration (VII) Initiative.
3.     Scope and Methodology [MIT/LL & NCAR/RAL]
The broad aim of this study was to characterize the state of weather sensors at NextGen Initial
Operational Capability (IOC) and to assess their aggregate capability to meet the 4D Weather
Cube observation requirements. Given the limited time available, however, we decided to
narrow the focus so that specific and useful findings could be reported. This chapter discusses
the methodology that we used, the motivations for going down this path, and its associated
assumptions and limitations.
3.1    Approach
Although this study is an assessment of sensors, its raison d’être is the weather information
needs of NextGen. Therefore, rather than beginning with a catalog of sensors and their
characteristics, we took the NextGen 4D Weather Cube observation (i.e., functional)
requirements as the starting point and matched the sensors to each defined entry. However,
the performance requirements were not yet available during this study, so the sensor mapping
was limited to the functional requirements (JPDO 2008). To be specific, these are all the
requirements that fall under the function “Observe Atmospheric and Space Conditions.” (Note
that these requirements cover all of the 4D Weather Cube, not just the smaller subset defined
by the 4D Weather Single Authoritative Source (SAS).) The following hierarchical relation puts
this function into the context of NextGen Enterprise Architecture (EA): F0 NextGen Services →
F1 Enterprise Services → F1.1 Provide Weather Services → F1.1.1 Observe Atmospheric and
Space Conditions. Sensor assessment relative to the performance requirements is still needed
and will be conducted in FY2010.
To begin, we needed to create a manageable structure for the sensor assessment data. We
chose to do this in a spreadsheet, and listed the 311 functional requirements down a column.
Any sensor that would provide data to meet a requirement would be inserted as a row under
the requirement entry. The columns to the right were used to list the various sensor
characteristics and other associated information. The column labels are listed below.
       1. Requirement
       2. Change/Author
       3. Comments
       4. Source
       5. Measurement System/Platform
       6. Operational Readiness Status and Timeline
       7. Environmental Parameter Name
       8. NOAA GCMD Variable
       9. Measurement or Derived
       10. Measurement Algorithm
       11. Measurement Units
       12. Measurement Min
       13. Measurement Max
       14. Representative Measurement Accuracy
       15. Representative Measurement Accuracy Units
       16. Representative Measurement Precision
       17. Representative Measurement Precision Units
       18. Representative Measurement Uncertainty
       19. Representative Measurement Uncertainty Units
       20. Data Latency
       21. Environmental Parameter Timeline
       22. Environmental Parameter Timeline Units
       23. Reporting Frequency
       24. Sampling Frequency
       25. Sampling Duration
       26. Measurement Stability
       27. Measurement Extent
       28. Other Key Parameter Properties
       29. Remote Sensing (Yes/No)
       30. User/Platform
       31. Geographic Coverage
       32. Geographic Coverage Description
       33. Horizontal Grid Spacing Units
       34. Representative Horizontal Grid Spacing
       35. Vertical Resolution Units
       36. Representative Vertical Resolution
       37. Associated Spectral Characteristics
       38. Coverage in a GIS Formatted Geospatial Database
       39. Geographical Coverage Data
       40. Coverage Description Web Page
       41. Coverage Description Material
[Is this list necessary? If so, should there be an explanation of each entry? Should they go in an
appendix?]
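As a rough illustration of how one assessment-matrix row maps onto the columns above, the sketch below uses a small subset of those columns. The field names follow the column labels, but the example values are invented and are not entries from the actual spreadsheet.

```python
# Minimal sketch of one assessment-matrix row using a subset of the columns listed above.
# The example values are illustrative only and are not taken from the actual spreadsheet.
from dataclasses import dataclass

@dataclass
class MatrixRow:
    requirement: str              # column 1: functional requirement text or identifier
    measurement_system: str       # column 5: sensor/platform addressing the requirement
    environmental_parameter: str  # column 7
    measured_or_derived: str      # column 9: "Measured" or "Derived"
    units: str                    # column 11
    remote_sensing: bool          # column 29
    geographic_coverage: str      # column 31
    source: str = ""              # column 4: reference for the entry (e.g., NOSA)
    comments: str = ""            # column 3

rows = [
    MatrixRow(
        requirement="Observe horizontal visibility in the terminal area",  # illustrative wording
        measurement_system="ASOS",
        environmental_parameter="visibility",
        measured_or_derived="Measured",
        units="statute miles",
        remote_sensing=False,
        geographic_coverage="CONUS airports",
        source="NOSA",
    ),
]

# A requirement with several candidate sensors simply contributes several rows, mirroring
# the "one row per sensor under each requirement entry" layout of the spreadsheet.
```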
As described in Chapter 2, multiple sub-teams (with multiple members each) contributed to
gathering the sensor assessment data. Thus, in order to facilitate simultaneous entry by
different individuals, the spreadsheet matrix was translated into a Web-based interactive form
that updated the master database in real time. A separate page was provided for listing the
references cited under the “Source” column.
In addition to individually filling in this assessment matrix, team members held biweekly
teleconferences and gathered for a three-day workshop to exchange information and ideas,
and to formulate plans. Colleagues not included on the teams were also consulted when their
expertise was needed on a given point. Also, as discussed in Chapter 2, other organizations
have produced or are producing catalogs of weather sensors that we could leverage. In
particular, we made extensive use of the NOAA Observation System Architecture (NOSA)
database in filling out our assessment matrix.
[Insert description of gap identification process?]
3.2    Priorities and Limitations
Comprehensive coverage of the large assessment matrix (and subsequent analysis and
discussion) was not possible given our resource constraints. Therefore, we established a set of
priorities to guide us on which areas to focus on first. We now discuss these prioritization schemes
by category.
Sensor and data ownership and access
Weather sensors are owned and operated by public and private entities. The public sector is
composed of government organizations at all levels—federal, state, county, city, etc. The
private sector is also diverse, including groups such as universities, television stations, power
utility companies, etc. The data produced by these sensors can be categorized as open, closed,
or restricted, but the categorization is not necessarily the same as the sensor ownership status.
For example, a public entity (the military) can keep its data closed (classified), whereas a private
organization (a university) could make its data open to the public. Data access can further be
parsed according to cost (free vs. priced), latency (real-time vs. delayed/archived), format
(standard vs. proprietary), etc.
With regard to our study, federally owned sensors with open-access data garnered top priority.
FAA-invested sensors received a lot of attention, since one of the purposes of this study was to
issue recommendations on future decision points in the EA Weather Sensor Roadmap.
Privately owned sensors were also considered if their data status was open access. Sensors
that were not part of a network tended not to be included. Given the time limitations, and
based on the low probability that they would be available for NextGen use at IOC, we did not
include sensors with closed-access data. Relevant foreign sensor data (such as from the
Canadian weather radars) were not excluded from consideration, but were given low priority.
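The prioritization rules just described can be summarized as a simple decision function. The sketch below encodes them with illustrative category names and is not part of the study's tooling.

```python
# Sketch of the sensor-prioritization rules described above; names are illustrative.
def assessment_priority(owner, data_access, networked, foreign):
    """Return a coarse inclusion priority for a sensor, or None to exclude it."""
    if data_access in ("closed", "restricted"):
        return None          # closed- and restricted-access data tended not to be included
    if not networked:
        return None          # sensors that were not part of a network tended not to be included
    if foreign:
        return "low"         # e.g., Canadian weather radars: not excluded, but low priority
    if owner == "federal":
        return "top"         # federally owned, open-access sensors garnered top priority
    return "considered"      # privately owned sensors with open-access data

print(assessment_priority(owner="federal", data_access="open", networked=True, foreign=False))  # top
print(assessment_priority(owner="private", data_access="open", networked=True, foreign=False))  # considered
```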
Sensors and their products
Although the assessment matrix is an attempt to match sensors to weather observation
requirements, in most cases useful weather information (the product) is not obtained directly
from the raw data output of the sensor. Usually, the raw data is processed further within what
is defined to be the sensor’s own hardware and/or outside it. In some cases the processing
incorporates data from other sensors (of the same kind, different kinds, or both). In other
cases the processing combines numerical model output data with the sensor data to generate
the weather product. Therefore, an entry in the assessment matrix is usually a specific sensor
product rather than the sensor itself.
However, since this study was a sensor assessment, we prioritized the inclusion of single-sensor
products. Multi-sensor products and products incorporating model data were included if there
was a possibility of a functional gap without them.


Sensor status
Sensors (and their data products) can be in various stages of technological maturity. Some
sensors have been in operational mode for many years, while others are still considered
research projects. The emphasis was on systems that are currently operational. However,
since the assessment was for a future time (IOC—2013), we also considered sensors and
products that were expected to be ready for operational use by then. Discussions of even more
experimental systems and processing algorithms were included if there was a possibility of a
functional gap without them.
[should we be concerned with maintenance and replacement schedules etc. also?]
Coverage and performance
Because the official weather sensor performance requirements were not available during this
study, we prioritized the information gathering for the sensor assessment matrix to emphasize
functional parameters over specific performance metrics. The matrix entries relating to
performance, such as measurement accuracy and uncertainty, that were left unpopulated this
year will be filled in for next year's gap analysis, which will include performance considerations.
Aviation hazards
Of the long list of weather observation requirements, we focused most intensely on ones that
covered aviation hazards, i.e. phenomena that could lead to loss of lives, injury, aircraft loss or
damage.
Coverage domains
Apropos of the above, we prioritized the analysis of coverage in terminal airspace, as that is the
domain most dangerous for flights. Coverage of the other airspace volumes (en route, global)
was also examined, but less attention was focused on them.
The priorities discussed above implicitly point out some of the limitations of our study. As
mentioned already, we assessed the sensor products relative to the functional requirements
and not the performance requirements. The lower priority (relative to aviation hazards)
requirements were not thoroughly covered, and sensors (and their products) still in the
research and development stage were not characterized fully. Sensors with restricted data
access tended not to be included. This study was clearly an initial cut at an assessment that is
an ongoing effort, for which we plan to expand the scope to include many of these areas.
3.3 Terminology and Ambiguities
In this section we define the terminology used in this report that members of the study team
believed could be a source of confusion to the readers. We also discuss some ambiguities that
we encountered in dealing with the functional requirements.
As a way of categorizing the types of sensors, one of the distinctions we used was ground
based vs. airborne. This division is fairly self-evident, with the criterion having to do with
whether the weight of the sensor is resting on the surface of the planet or being supported in
the atmosphere above it. There are a few cases that may not seem to be so clear-cut, such as
buoy-based and satellite sensors. For the purposes of this report, the former is ground-based
and the latter is airborne. Sensors on a tethered balloon or kite are considered airborne by our
definition, since their weight does not rest on the ground. Sensors mounted on ground-based
vehicles are considered to be ground based.
Another binary division we employed was in situ vs. remote sensing. This characterization
depends on the distance between the sensor and the physical entity from which it obtains
information. A device is in situ if what it observes is either in contact with the sensing element
or is within the physical volume of the sensor. An instrument employs remote sensing if the
entity from which information is obtained is some distance away from the sensor. There are
some potentially ambiguous cases such as the ultrasonic anemometer, where local information
is obtained not by direct contact but through sound emission and receiving, but we include
such cases under in situ, since the measurement is made only within the immediate vicinity of
the sensor.
In general, an in situ sensor provides a point observation, while a remote sensing device yields
a volume observation. However, the term “point” is not used in the mathematical sense of
possessing no volume. In actuality, a point measurement has a zone of high correlation around
it, and this spatial extent should be taken into account when determining the coverage of an in
situ sensor. Furthermore, an in situ sensor situated on a moving platform will trace out a line
over the course of a sampling period, so it is not a point observation even in the loose sense.
The classification of the spatial domains used in this report follows the scheme outlined in the
preliminary portfolio requirements document (Moy 2008). For above-surface observations,
terminal airspace is the volume of airspace within 100 km of airport centerfield from the
ground up to the top of the terminal volume [definition?]. [Does this apply to all airports or just
OEP or…?] En route airspace is the volume of non-oceanic national airspace system (NAS) not
occupied by terminal airspace. Global airspace is the union of oceanic and non-NAS airspace.
For surface observations, the terminal area refers to certain designated areas at [?] airports.
En route area covers the NAS surface areas minus the terminal areas. Global area indicates
surface areas outside the NAS.
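As an illustration of these domain definitions, the sketch below classifies an above-surface point using the 100 km terminal radius cited above. The airport list, the non-oceanic NAS membership test, and the terminal-volume top (left undefined in the text) are placeholders.

```python
# Illustrative sketch of the airspace-domain classification described above.
# The airport list, NAS test, and terminal ceiling are placeholders; the report leaves the
# top of the terminal volume undefined ("[definition?]").
from math import radians, sin, cos, asin, sqrt

TERMINAL_RADIUS_KM = 100.0   # from the preliminary portfolio requirements (Moy 2008)
TERMINAL_TOP_FT = None       # undefined in the source; placeholder

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def classify_airspace(lat, lon, airport_centerfields, in_non_oceanic_nas):
    """Return 'terminal', 'en route', or 'global' for an above-surface point."""
    near_airport = any(
        haversine_km(lat, lon, a_lat, a_lon) <= TERMINAL_RADIUS_KM
        for a_lat, a_lon in airport_centerfields
    )
    if in_non_oceanic_nas and near_airport:
        return "terminal"
    if in_non_oceanic_nas:
        return "en route"    # non-oceanic NAS not occupied by terminal airspace
    return "global"          # oceanic plus non-NAS airspace

# Example with a made-up airport location and a nearby point inside the non-oceanic NAS.
print(classify_airspace(40.0, -105.0, airport_centerfields=[(39.86, -104.67)], in_non_oceanic_nas=True))
```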
Accuracy and precision are often used as complementary terms to characterize the
measurement performance of a sensor. However, according to the International Organization
for Standardization (ISO), both are qualitative terms and have multiple definitions (ISO 1993).
Thus, the use of accuracy and precision should be avoided in expressing quantitative
parameters. Instead we opt to quantify the uncertainty, a parameter that characterizes the
range of values in which the measured value lies within a specified confidence level.
We should also clarify the difference between resolution and reporting quantization. The
former has real physical significance, while the latter is only the fineness of scale at which
measured or derived results are reported or displayed. In the spatial domain, reporting
quantization may be called grid spacing, gate interval, pixel size, etc. In the temporal domain, it
may be referred to with terms such as reporting interval, output frequency, sample spacing,
etc. These quantities should not be confused with the resolution, which defines the range
within which the measurement is valid and independent of the neighboring measurements. It
is possible for resolution and reporting quantization to have the same value, but in general they
do not. If the reporting quantization interval is smaller than the resolution interval, the results
are oversampled; if the reverse is true, then the results are undersampled. [I still don’t
understand why resolution is only applied to the vertical dimension and grid spacing is only
applied to the horizontal dimension in the spreadsheet.]
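A small numeric illustration of the distinction, with made-up values, may help:

```python
# Illustrative numbers only: classify a measurement stream as oversampled or undersampled
# by comparing its reporting quantization to its resolution, as defined above.
def sampling_regime(reporting_quantization, resolution):
    if reporting_quantization < resolution:
        return "oversampled"       # reported more finely than the data are independent
    if reporting_quantization > resolution:
        return "undersampled"      # independent information is discarded between reports
    return "critically sampled"

# Hypothetical radar example: 250 m gate spacing with a 1000 m effective resolution.
print(sampling_regime(reporting_quantization=250.0, resolution=1000.0))  # -> oversampled
```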
As discussed in Section 3.2 regarding sensors and their products, various levels of processing
are applied to raw sensor data to generate weather products. If a sensor product is directly
related to the sensor measurement, it is classified as measured. Otherwise, the product is
labeled derived.
[Insert discussion of atmospheric phenomenon vs. parameter/quantity]
As one of the main goals of this sensor assessment is to identify gaps in meeting the weather
observation requirements, we need to discuss what we mean by a gap. At the most basic level,
there could be a theory gap, where there is not enough understanding on how to make
measurements (or even what measurements to make) to meet an observation requirement.
Given the appropriate knowledge, there could still be an engineering gap, where the
technology necessary for building the needed sensor (and/or sensor platform) does not exist
yet. If the sensor is built and deployed for research, time and effort are still needed to bring it
to robust operational status; in the meantime, there is an operational gap. For a derived
product, there will be a product gap until an algorithm for generating it is developed,
implemented, and validated.
[The classification and terminology of these gap types are certainly open for debate.]
With the availability of a sensor product capable of fulfilling a functional requirement, there are
still other types of potential gaps. If the spatial domain over which the requirement is defined
is not completely covered, then there is a spatial coverage gap. If the required time coverage
(e.g. 24/7) cannot be met, then there is a temporal coverage gap. If any of the performance
requirements are not met, then there is a performance gap. There may be a communication
gap if access to the sensor product is restricted or if the data transfer infrastructure is
inadequate, resulting in missed and/or tardy data. And in the context of the NextGen Network
Enabled Weather (NNEW) program and the network-of-networks vision, a metadata gap can
hinder the proper characterization, dissemination, and usage of the sensor product. A dynamic
gap could occur temporarily due to sensor failure, network or power interruption, sabotage,
natural disaster, etc. Finally, any of these gaps can be directly or indirectly produced by a
funding gap.
Although the different gap parameters exist independently, they still need to be examined
within the context of each other. For example, performance parameters are often dependent
on the coverage domain. Therefore, in such a case, a gap should be defined jointly with respect
to both spatial coverage and performance parameters.
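For a future gap-tracking exercise, the gap taxonomy introduced in this section could be captured in a simple structure such as the sketch below; the names and the example records are illustrative only.

```python
# Sketch of a gap record mirroring the gap taxonomy defined in Section 3.3.
# The class, field names, and example requirement identifier are illustrative, not part of
# any existing tool.
from dataclasses import dataclass
from enum import Enum, auto

class GapType(Enum):
    THEORY = auto()
    ENGINEERING = auto()
    OPERATIONAL = auto()
    PRODUCT = auto()
    SPATIAL_COVERAGE = auto()
    TEMPORAL_COVERAGE = auto()
    PERFORMANCE = auto()
    COMMUNICATION = auto()
    METADATA = auto()
    DYNAMIC = auto()
    FUNDING = auto()

@dataclass
class Gap:
    requirement_id: str   # functional requirement the gap is assessed against
    gap_type: GapType
    domain: str           # e.g., "terminal", "en route", "global"
    notes: str = ""

# Performance often depends on the coverage domain, so a joint gap is recorded as a pair of
# entries tied to the same requirement and domain rather than as two unrelated findings.
joint_gap = [
    Gap("F1.1.1-042", GapType.SPATIAL_COVERAGE, "en route", "no coverage over part of the domain"),
    Gap("F1.1.1-042", GapType.PERFORMANCE, "en route", "accuracy requirement not met where coverage exists"),
]
```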
4.     IOC Assessment and Key Findings [NCAR/RAL & MIT/LL]
This chapter summarizes key findings from the IOC assessment. Critical sensors and platforms
are identified, and risks to them in the IOC timeframe are pointed out (section 4.1). Projected
gaps with respect to the functional requirements are enumerated and analyzed (section 4.2).
Important lessons learned in this study are discussed (section 4.5), and opportunities for
follow-on activities are highlighted (sections 4.3 and 4.4).
4.1 Sensor catalogue
The assessment spreadsheet provides a catalogue of sensors that are currently in use or have
the potential to contribute relevant information to be utilized by aviation users. This sensor list
has not yet been analyzed and ordered in terms of relevance and priority for aviation users; this
remains to be done as part of the FY10 in-depth gap analysis.
Clearly, the sensor catalogue has its shortcomings. The spreadsheet has grown unwieldy, which
reduces its effectiveness for retrieving relevant information. Some capability to slice and dice the
information in different ways (e.g., by sensor type; by aviation weather hazard; by geographic
location; etc.) would be highly useful. Moreover, flexible search and visualization capabilities
might be envisioned; however, it is apparent that a substantial amount of work would be
required to amend the spreadsheet and its underlying database to enable that. For example,
the sensor list may include a specific type of sensor, but in order to be able to visualize it
geographically one needs metadata for all those sensors individually (which may be hundreds
across the country), including their location and other relevant information.
Another limitation of the spreadsheet is that it builds upon the functional requirements only
without having performance requirements associated with it. The latter have not been
available to the assessment team thus far. Matching up functional and performance
requirements will be a key effort for FY10 and provide the basis for more in-depth gap analyses.
For example, there may be an existing detection capability that satisfies a functional
requirement; however, the performance requirements might not be achievable at present,
which leaves a gap.
In the remainder of this section, some of the most critical platforms and their perceived risk for
IOC and beyond are addressed briefly.
Ground-based weather observing systems
[insert some statements about ASOS and LWE etc.]
Wind-shear detection systems
The terminal-area wind-shear sensing requirements are some of the most critical observational
tasks within the NAS. Microbursts along the paths of approach, landing, and departure are
among the deadliest weather phenomena for aviation. With this in mind, we summarize the expected
state of terminal wind-shear sensors in the future.
The Terminal Doppler Weather Radar (TDWR) is the most capable (and most costly) wind-shear
detection system. It first became operational in May 1994, was fully deployed by January 2003,
and was expected to be decommissioned by 2012. However, a service life extension program (SLEP)
was approved and is currently ongoing (anticipated to be done by the beginning of 2013), which
will push off the end of its life to about 2019. The SLEP has now progressed to the point where
it is reasonable to expect that the TDWR will be operational well beyond IOC, so the risk is small
for the TDWR at IOC.
Another radar-based wind-shear detector is the Weather Systems Processor (WSP), which is a
processing system piggybacked onto the Airport Surveillance Radar-9 (ASR-9). In this particular
case, the lifetime depends on both systems. The ASR-9 (initially operational in May 1989 and
fully deployed in September 2000) is expected to go completely out of service by the end of
2025, so it will be safe for IOC. The WSP, originally slated for end of service by 2011, is
undergoing a technology refresh (TR) that will extend its life to 2017. The TR is in the
deployment stage, so the WSP also appears to be in good shape for IOC.
As for the anemometer-based wind-shear detection systems, the Low-Level Wind-shear Alert
System (LLWAS) 2, which would have gone out of service by 2014, will be upgraded through the
relocation and sustainment (RS) program (to be completed by the end of 2012). The new
LLWAS-RS system will then be scheduled for a 2019 decommissioning date. Another version of
LLWAS—the Network Expansion and software rehost (NE++), itself an upgrade—is slated to be
operational through 2018. The LLWAS systems appear to carry low risk for IOC.
Finally, a Doppler lidar has been installed at the Las Vegas airport to supplement coverage by
the TDWR in areas of extreme ground clutter, a problem made more intractable for the radar due
to the presence of low-reflectivity dry microbursts there. The lidar is expected to become
operational by the end of 2010. The hardware is a commercial off-the-shelf product. At
present, there are no plans to deploy this system at other locations.
Beyond IOC, the fate of these FAA-owned wind-shear sensors is unclear. The EA Weather
Roadmap calls for investment decisions regarding further SLEP and TR for these sensors in 2010
(initial) and 2012 (final). An even bigger decision point looms in 2016, with a wider range of
options such as the replacement of terminal wind-shear detectors and all weather surveillance
radars (including NEXRAD) with a multifunction phased array radar (MPAR).
Airborne wind-shear detectors, operating on the data provided by the weather radar in the
aircraft’s nose cone, are an important supplement to the ground-based systems. These so-
called predictive wind shear (PWS) radars, however, are not capable enough to replace their
earth-bound counterparts (Hallowell et al. 2009). The equipage rate of commercial aircraft
with PWS radars has increased over time (up to 67% in September 2007, Hallowell et al. 2009).
It is only expected to grow in the future, although regional jets (and certainly most general
aviation aircraft) may not have enough real estate up front for an effective PWS radar.
Weather radar systems
The NEXRAD is, overall, one of the most indispensable weather observation systems. Its
combination of spatiotemporal resolution and en route domain coverage is unmatched by any
other sensor network or satellite instruments. As with many of the other radars, the NEXRAD is
currently undergoing an upgrade. The initial phase of transforming the signal processing and
product generation platforms into open systems has been completed, and now the dual-
polarization hardware renovation is ongoing. The current schedule calls for the dual-
polarization system to be deployed nationwide by September 2012. This is clearly a risk for
IOC, especially since the operational implementation of software builds that incorporate dual-
polarization product algorithms will likely lag the hardware schedule. There are observational
requirements that depend on the availability of dual-polarization radar products (mostly having
to do with hydrometeor classification), which, therefore, may not be met at IOC. Beyond IOC,
NEXRAD is also subject to the EA Weather Roadmap decision point in 2016, when replacement
by MPAR will be considered.
Satellite weather observing systems
[insert some statements about satellite sensors]
Key platforms for the space weather requirements include ACE, GOES, and SOHO. On the ACE satellite, the EPAM, MAG, SIS, and SWEPAM instruments that compose the Real-Time Solar Wind (RTSW) data stream are critical. These data are used to predict and monitor geomagnetic storm activity. The radiation and geomagnetic field sensors on the GOES satellites are likewise critical; they play a key role in monitoring radiation levels, particularly during solar radiation storms. The LASCO coronagraph on SOHO is crucial for observing solar flares and coronal mass ejections. It is important to note that a fraction of the sensors identified in this catalog, including ACE and SOHO, are operated by NASA as scientific missions and as such are not guaranteed to be operational at IOC or beyond.
4.2 Gaps
A wide array of potential gaps may exist, as defined in section 3.3. An in-depth gap analysis will
be performed in FY10. Some obvious gaps based on a preliminary analysis of the functional
requirements are compiled in three tables, listing gaps associated with ground-based sensors
(Table 4.1), radar/lidar sensors (Table 4.2), and airborne/spaceborne sensors (Table 4.3). These
tables are meant to be scanned at a glance; therefore, their contents are not repeated here in text form. Instead, the comments below examine the types of gaps from a broader perspective.
Human observations
Current aviation operations include a significant amount of human-based, visual observations.
These include, for example, dust/sand swirls or storms; funnel clouds and waterspouts; blowing
spray, snow, sand and widespread dust in terminal area; airport, tower and runway visibility;
and biological hazards such as birds; among many others. For most of these phenomena there
exists little or no automated observing capability, and this situation will likely hold for IOC.
Data access and utilization
A large amount of potentially useful data exists, but the data may not be available in real time (e.g., because of access restrictions or communication bandwidth) or may suffer from limited data standards and quality (e.g., TAMDAR). [list some examples] Thus, there is a need to facilitate better and more timely access to existing platforms and sensors, to data from entire networks, and across national borders. In addition, investments need to be made in data quality, metadata, communications, networking, and shared access.
Moreover, the currently available data may not be utilized to their fullest extent. For example,
data may be available in real time, but algorithms (or data assimilation schemes) have yet to be
developed in order to make better use of them. [provide more “meat to the bones” and add
examples]
A lot of data are being used to initialize numerical weather prediction (NWP) models. NWP forecast skill depends strongly on how well the observations resolve horizontal and vertical gradients in the moisture, temperature, and pressure fields. Moreover, the model physics representing atmospheric boundary layer, cloud and precipitation, and radiation processes require further improvement. For example, higher resolution modeling demands a better understanding of the physical processes that may have been parameterized in coarser resolution models. High resolution modeling, such as that provided by the High-Resolution Rapid Refresh (HRRR) model, resolves convective-scale processes, yielding more realistic-looking storms that also move at more realistic speeds. Such model advancements, however, have to go hand-in-hand with the development of observational capabilities that provide relevant data against which to test model performance. NWP model prediction failures can often be traced back to a lack of relevant mesoscale observations (e.g., boundary layer and tropospheric temperature, humidity, wind, and stability profiles).
[add other issues]
Weather phenomena
Wake vortex:
Among the functional requirements is a set of entries concerning wake vortex observation at
designated airports (determine location—horizontal and vertical displacement—and
dissipation). Although Doppler lidars have been used for this task in research projects,
currently there is no plan to deploy lidars at airports for wake vortex detection, nor have
operational products been developed for meeting these specific requirements. Instead, the EA
Weather Roadmap calls for wake turbulence mitigation systems to be implemented, which do not provide direct observations of wake vortices but instead utilize wind forecasts to predict their average movement. In this instance the EA Roadmap is not exactly aligned with the NextGen requirements, thus leaving a gap.
There are some FAA- and NASA-sponsored research programs that investigate wake vortex detection (using ground-based lidar) and forecasting. For the latter application, vertical profiles of wind, stability, and turbulence (EDR) are needed. MDCRS data can provide this information, although the current turbulence downlinks may not have adequate vertical resolution for the vortex problem. In addition, boundary layer wind profilers (with radio acoustic sounders for temperature) could also provide valuable information.
Microburst and low-level wind shear motion:
The requirement to determine the speed and direction of microbursts (as well as low-level wind
shears) is not currently met, nor are there plans to do so for IOC. It is, however, possible to
develop such a capability utilizing radar-derived microburst detection and storm motion
information.
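As a rough illustration of such a capability, the sketch below estimates microburst motion by tracking the centroid of a detected outflow between two successive radar scans. The detection itself is assumed to come from an existing radar product; the function name, coordinate handling, and numbers are illustrative, not an operational algorithm.

```python
import math

def microburst_motion(p1, p2, dt_s):
    """Estimate microburst motion from two successive detection centroids.

    p1, p2 : (east_m, north_m) centroid positions from consecutive radar scans
    dt_s   : time between the scans in seconds
    Returns (speed_m_s, direction_deg), where direction is the heading the
    feature is moving toward (degrees clockwise from north).
    """
    dx = p2[0] - p1[0]          # eastward displacement (m)
    dy = p2[1] - p1[1]          # northward displacement (m)
    speed = math.hypot(dx, dy) / dt_s
    direction = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed, direction

# Example: centroid moved 1.2 km east and 0.5 km north during a 5-minute update cycle
print(microburst_motion((0.0, 0.0), (1200.0, 500.0), 300.0))
```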
Vertical extent of low-level wind shear:
Radar observation of low-level wind shear is conducted using only the minimum elevation angle
(surface) scan. Currently there is no attempt at determining the vertical extent of the wind
shear. In principle, such a determination is possible by utilizing data from multiple elevation
scans, but it would be limited by the radar antenna beamwidth and the viewing geometry. A
Doppler lidar would have the desired vertical resolution, but it is strongly limited in range by
precipitation and cloud attenuation.
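To illustrate the geometric limitation, the sketch below applies the standard 4/3-earth-radius beam-height approximation to show how coarsely a low-level feature is sampled in the vertical by successive elevation scans. The range, elevation angles, and beamwidth are illustrative values, not a specific radar's scan strategy.

```python
import math

EARTH_RADIUS_M = 6.371e6
EFFECTIVE_RADIUS_M = 4.0 / 3.0 * EARTH_RADIUS_M   # standard refraction model

def beam_center_height_m(range_m, elev_deg, antenna_height_m=0.0):
    """Beam-center height above the radar site (4/3-earth-radius approximation)."""
    elev = math.radians(elev_deg)
    return (math.sqrt(range_m**2 + EFFECTIVE_RADIUS_M**2 +
                      2.0 * range_m * EFFECTIVE_RADIUS_M * math.sin(elev))
            - EFFECTIVE_RADIUS_M + antenna_height_m)

def beam_width_m(range_m, beamwidth_deg=1.0):
    """Cross-beam diameter at a given range for the stated half-power beamwidth."""
    return range_m * math.radians(beamwidth_deg)

# Vertical sampling of a wind-shear feature 30 km from the radar:
for elev in (0.5, 1.0, 1.5, 2.4):
    h = beam_center_height_m(30e3, elev)
    w = beam_width_m(30e3)
    print(f"elev {elev:3.1f} deg: beam center ~{h:5.0f} m AGL, beam ~{w:4.0f} m wide")
```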
Tornado, waterspout, and funnel cloud:
There are separate observation requirements for tornado, waterspout, and funnel cloud. A
waterspout is a tornado over a body of water (as opposed to over land). A funnel cloud is a
funnel-shaped condensation cloud associated with a violently rotating column of air that is not
in contact with the Earth’s surface. It is the separation from the surface that distinguishes it
from a tornado or waterspout. There is a radar-based product called the tornadic vortex signature (TVS), but it does not distinguish between these three phenomena. Separating over-water vs. over-land events is a simple matter, but determining whether the spinning column is touching the ground (or water surface) is not straightforward. The TVS also does not report intensity, which is a requirement.
Well-developed dust/sand whirls:
The American Meteorological Society (AMS) definition of a sand whirl or well-developed dust whirl is a dust devil. Currently there is no sensor product for dust devil detection. A dust devil has a diameter of 3 to 30 m and an average height of about 200 m. In general, this is too small and too low to be resolved and covered by the existing network of radars or by satellites; a 1-degree radar beam, for example, is already roughly 870 m wide at 50 km range, far larger than the vortex itself. A specialized, high-resolution (short-wavelength) radar, or a Doppler lidar, might be used for observation, but a product would need to be developed that distinguishes dust devils from other phenomena.
Virga:
Falling shafts of hydrometeors that evaporate before reaching the ground are called virga.
Weather radars can observe the precipitation aloft associated with virga, but due to the
elevated minimum beam heights they may not be able to detect a precipitation-free zone
beneath the precipitation aloft. There is presently no sensor product for virga identification.
Such a product could be developed using weather radar data combined with high-density
ground observation data of precipitation.
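A minimal sketch of such a combined product is shown below, assuming collocated radar reflectivity aloft and a surface precipitation measurement are available at each analysis point. The function name and thresholds are illustrative placeholders, not validated values.

```python
def virga_flag(refl_aloft_dbz, surface_precip_mm_hr,
               dbz_threshold=15.0, precip_threshold=0.1):
    """Flag possible virga: precipitation echo aloft with no precipitation at the surface.

    refl_aloft_dbz       : radar reflectivity in the lowest usable elevation scan (dBZ)
    surface_precip_mm_hr : collocated surface precipitation rate (gauge or present-weather sensor)
    Thresholds are illustrative placeholders.
    """
    return refl_aloft_dbz >= dbz_threshold and surface_precip_mm_hr < precip_threshold

print(virga_flag(22.0, 0.0))   # echo aloft, dry surface -> True (possible virga)
print(virga_flag(22.0, 2.5))   # echo aloft, rain at surface -> False
```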
Squall:
A squall is a strong wind with a sudden onset, duration of the order of minutes, and a rather
sudden decrease in speed. A squall line is a line of active thunderstorms, including contiguous
precipitation areas due to the storms. The functional requirements have entries that refer to
squall observations (JPDO 2008), while the preliminary portfolio requirements use the term
squall line (Moy 2008). Thus, there is ambiguity about which phenomenon is the subject of the
official requirements. In either case, there is currently no specific sensor product that
addresses the requirements for location, movement, and time. However, there does not
appear to be any significant technical obstacle to developing such a product. For example, in
the case of a squall line, the convective weather forecast algorithm in the Corridor Integrated
Weather System (CIWS) internally classifies weather into line storms, different types of cells,
and stratiform precipitation.
Gravity Waves:
Gravity waves (or buoyancy waves) are also an aviation hazard not specifically targeted by the
functional requirements. Low-altitude wind shears due to these waves as well as clear-air
turbulence generated by breaking waves at high altitude are dangers to aircraft. The National
Research Council (NRC) report on mesoscale meteorological sensing needs (NRC 2008) points
out gravity waves as an important phenomenon for observation. While specific radar products
are generated for low-level wind shear due to microbursts and gust fronts, no such product
exists for gravity-wave induced wind shear; thus, aircraft may be exposed to this dangerous
phenomenon even where there is coverage by appropriate radars (Bieringer et al. 2004).
Environment [not assessed in FY09]
Environmental impacts from aviation-related activities are getting increasing attention.
NextGen will have to address issues related to noise and emission pollution near airports, such as exhaust from aircraft taking off or deicing fluids getting into ground water systems. Moreover, contrails have a non-negligible effect on the radiation balance and thus on climate. Monitoring and prediction capabilities will have to be developed to quantify
and minimize environmental impacts. [may need to provide more details . . .]
Bird strikes and wildlife incursions
Bird strikes have long been recognized as a critical hazard for aircraft. The January 15, 2009
multiple bird-strike event that brought down US Airways Flight 1549 into the Hudson River
garnered widespread public attention and angst. Given the high incidence rate of bird strikes,
some view it as only a matter of time before a disaster strikes. Meanwhile, commercial bird
detection radars are available, and research has shown that data from existing FAA radars can
be effectively used for bird detection (Troxel 2002). And yet, there is no bird detection
requirement. [Is there a NextGen bird detection requirement in some other portfolio?]
Although birds are not exactly an atmospheric phenomenon, the same sensors and techniques
used to observe weather can be applied for bird detection, so it would be pragmatic to place
this aviation hazard under the aegis of weather observation.
Bird strikes and other wildlife incursions (e.g., turtles on runways, as observed at JFK airport this year [need date here]) do not show up in the functional requirements (a gap in itself?) and thus were not specifically addressed as part of the FY09 IOC assessment.
Note that the FAA and USDA have begun a sponsored program to evaluate the feasibility of
commercial radar systems to detect and track birds in the airport environment. The FY09/10
efforts are focused on the evaluation of one vendor’s system.
Volcanic ash [not assessed in FY09]
Volcanic eruptions may generate ash clouds that reach the tropopause within 5 minutes.
Moreover, these ash plumes may persist for a long time and travel around the globe. NRL and
AFWA have groups that develop capabilities to predict travel of ash plumes. A comprehensive
gap assessment for the ash cloud problem requires broader agency participation (i.e., USGS and
NOAA).
[add more information here?]
Space Weather
Space weather impact on aviation is a relatively new field and therefore is likely to contain
significant gaps in both knowledge and sensors. The specified requirements focused on
geomagnetic activity and radiation levels, but neglected the space weather impacts on Global
Navigation Satellite Systems (GNSS) as well as on communications. Current operations at the Space Weather Prediction Center (SWPC) use only one ground-based magnetometer as part of its monitoring of geomagnetic activity, which greatly limits the spatial coverage of that monitoring.
There is a need to evaluate the potential benefit of using GPS radio occultation (especially GPS/LEO links) to provide information on the total electron content (TEC) of the ionosphere.
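For reference, slant TEC can be estimated from dual-frequency GPS pseudoranges with the standard geometry-free combination, as sketched below. Differential code biases, multipath, and noise are ignored here, so the function name and example values are illustrative rather than operational.

```python
F1 = 1575.42e6   # GPS L1 frequency (Hz)
F2 = 1227.60e6   # GPS L2 frequency (Hz)
K = 40.3         # ionospheric constant (m^3 s^-2)

def slant_tec_tecu(p1_m, p2_m):
    """Slant total electron content from dual-frequency pseudoranges.

    p1_m, p2_m : L1 and L2 pseudoranges in meters (geometry-free combination).
    Ignores differential code biases and multipath; illustrative only.
    Returns TEC in TEC units (1 TECU = 1e16 electrons / m^2).
    """
    stec = (p2_m - p1_m) * (F1**2 * F2**2) / (K * (F1**2 - F2**2))
    return stec / 1e16

# Example: a 5 m L2-minus-L1 range difference corresponds to roughly 48 TECU
print(slant_tec_tecu(20000000.0, 20000005.0))
```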
Weather radar [this may have to go elsewhere, be reduced or omitted; it seems out of place
here]
Radar data is likely the most valuable resource available to forecasters and modelers for weather hazard detection and prediction. While NEXRAD coverage, defined by 143-mile radii around the radars, “covers” nearly the entire conterminous United States, roughly 70% of the boundary layer is unobserved because of earth curvature and beam blockage effects (at 143 miles, the center of the lowest 0.5-degree beam is already about 5 km above the surface). Thus, while NEXRAD is an invaluable resource, there are also large coverage gaps which, if filled, would provide valuable new information.
The most commonly known NEXRAD measurements are the radar moments of reflectivity (Z), velocity (V), and spectrum width (W). These variables are, however, indirect indicators of weather hazards. Reflectivity alone cannot distinguish between hail, rain, and ground clutter. Radar-derived velocity is a measure of the radial speed of hydrometeors (not the true wind speed). Spectrum width is potentially a valuable measurement, but it has historically been plagued by high measurement uncertainty. Additionally, radar data quality is compromised by ground clutter echoes, biological scatterers, RF interference, second-trip echoes, velocity ambiguities, beam spreading effects, and calibration issues.
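As an illustration of the velocity-ambiguity and second-trip constraints mentioned above, the sketch below evaluates the standard single-PRF relations for the Nyquist velocity and the maximum unambiguous range. The wavelength and PRF values are illustrative, not the operating parameters of any particular radar.

```python
C = 2.998e8   # speed of light (m/s)

def nyquist_velocity_m_s(wavelength_m, prf_hz):
    """Maximum unambiguous radial velocity for a single-PRF Doppler radar."""
    return wavelength_m * prf_hz / 4.0

def unambiguous_range_m(prf_hz):
    """Maximum unambiguous range; echoes beyond this fold in as second-trip returns."""
    return C / (2.0 * prf_hz)

# Illustrative S-band (~10 cm wavelength) numbers: raising the PRF extends the
# Nyquist velocity but shrinks the unambiguous range (the "Doppler dilemma").
for prf in (320.0, 1000.0):
    print(f"PRF {prf:6.0f} Hz: v_Nyquist = {nyquist_velocity_m_s(0.10, prf):4.1f} m/s, "
          f"r_max = {unambiguous_range_m(prf) / 1e3:5.0f} km")
```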
Vital to the effective use of radar data are (1) data quality control and (2) algorithm development for the detection of the weather parameters and phenomena of interest.
There are other derived radar variables of interest besides Z, V, and W. For example, NCP/SQI (Normalized Coherent Power/Signal Quality Index) can be used as a data quality indicator. Recently, CPA (Clutter Phase Alignment) has been used for effective ground clutter identification. Other potentially informative variables may be derived from the radar time series and associated spectra. The point is that radar data is a very valuable source of weather hazard information. Using it properly and extracting the information embedded in the data requires the development of signal processing algorithms for better weather observations and prediction.
Dual polarization of the NEXRADs promises to dramatically increase the amount of weather information available to forecasters and users. However, significant investment in data quality control, calibration, signal processing algorithms, and verification will be needed to unlock this information. High-quality dual-polarization data can potentially identify and differentiate rain, hail, ice crystals, ice, graupel, and biological scatterers. Additionally, dual-polarization radar may be able to detect volcanic ash plumes, forest fire plumes, and icing hazards. Any phenomenon that lofts or drops particles of roughly 100 microns or larger into the atmosphere in sufficient concentrations can potentially be detected and identified with dual-polarization data. The keys will be (1) investment in signal processing, (2) data quality control and metrics, (3) availability of other data sources, and (4) verification studies. For example, the quality of the NEXRAD dual-polarization data could be established via a field experiment: if KFTG were dual-polarized, its data could be compared to the nearby CSU-CHILL and S-Pol, two high-quality, dual-polarized S-band research radars.
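Hydrometeor classification from dual-polarization data is commonly implemented with fuzzy-logic techniques. The sketch below illustrates the general approach using trapezoidal membership functions over Z, ZDR, and RhoHV; the class set and membership bounds are illustrative placeholders, not the operational NEXRAD hydrometeor classification algorithm.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, ramps to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative membership bounds per class for (Z [dBZ], ZDR [dB], RhoHV).
# These are placeholders for illustration only.
CLASS_BOUNDS = {
    "rain": {"Z": (20, 30, 50, 55), "ZDR": (0.3, 0.8, 3.0, 4.0), "RHOHV": (0.95, 0.97, 1.00, 1.01)},
    "hail": {"Z": (45, 55, 70, 75), "ZDR": (-1.0, -0.5, 0.5, 1.0), "RHOHV": (0.85, 0.90, 0.97, 0.99)},
    "snow": {"Z": (5, 10, 30, 35),  "ZDR": (0.0, 0.2, 1.0, 1.5),  "RHOHV": (0.95, 0.97, 1.00, 1.01)},
}

def classify(z_dbz, zdr_db, rhohv):
    """Return the class with the highest averaged membership score."""
    scores = {}
    for cls, b in CLASS_BOUNDS.items():
        scores[cls] = (trapezoid(z_dbz, *b["Z"]) +
                       trapezoid(zdr_db, *b["ZDR"]) +
                       trapezoid(rhohv, *b["RHOHV"])) / 3.0
    return max(scores, key=scores.get), scores

# High Z, near-zero ZDR, reduced RhoHV -> hail-like signature
print(classify(60.0, 0.1, 0.93))
```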
4.3 Needs
A number of documents have been developed that compile functional, performance and other
requirements. Unfortunately, there is a fair amount of work needed to clean up these
documents (e.g., there are a number of odd or unrealistic requirements). It would be
particularly helpful to understand how these requirements came about—i.e., what needs drive
a functional capability and how the associated performance requirements have been determined. Moreover, the functional and performance requirements have to be synchronized
with the overarching NextGen Weather Integration Plan (WIP); there are some apparent
discrepancies (e.g., list examples from turbulence).
The FY09 effort included an IOC sensor assessment based on functional requirements only. Once the performance requirements become available, further in-depth analyses will have to be carried out to identify gaps, as defined in section 3.3. This will be the focus of the team’s FY10 effort. Moreover, it will be important to facilitate close participation of data users and sensor support agencies in this gap identification effort. This might be achieved through a series of focused workshops and meetings that shed light on particular aviation problems that need to be mitigated. It may also entail observing system simulation experiments (OSSEs) that elucidate the benefits of increased sensor density and/or ways to combine sensor data fusion with numerical modeling.
The FY10 in-depth gap analyses will likely reveal needs to develop new sensing capabilities, for example, to augment or replace human observations in order to meet the NextGen requirement of a largely automated system. Moreover, it is to be expected that sensor enhancements will be needed to meet performance requirements, or that additional sensors will need to be deployed to satisfy spatial coverage. Given the time it takes to get things into operations, new key sensor deployments need to start soon. More advanced capabilities utilizing dynamic adaptation and control may need to be developed to mitigate the occasional dynamic gap arising, for example, from a power, communication, or sensor failure. And last, but not least, there is a need to determine the cost/benefit ratio for “low-hanging fruit” cases in order to meet IOC/MOC requirements.
[probably many more …]
Space weather monitoring and prediction for aviation applications will benefit greatly from the
deployment of dedicated sensors instead of relying on scientific NASA missions (any others?).
4.4 Opportunities
There will be a variety of opportunities to enhance the current observing capabilities toward
satisfying NextGen requirements. Given the limited time and money available to get such enhancements into operations, it will be important to focus on low-cost, high-benefit activities.
A number of “low-hanging fruit” opportunities are listed below, albeit without any cost/benefit
assessment or attempt to prioritize them.
Runway crosswind and windshear
•  Access to one-second LLWAS data (as opposed to the usual 10-second reports) could be very useful in determining runway hazards due to turbulent winds. These data exist on the sensors, but are not transmitted.
•  LLWAS data can be used to estimate runway crosswinds. The sensors and data exist; all that is needed is an algorithm to compute the crosswind, determine whether the value exceeds what is deemed a hazardous level, and then generate a text message that can be displayed on the current LLWAS ribbon displays (a minimal sketch follows this list).
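The sketch below illustrates the crosswind computation and alert-text generation described above, assuming the LLWAS wind direction and the runway heading share the same (magnetic) reference. The function names and the 15 kt hazard threshold are illustrative placeholders, not operational values.

```python
import math

def runway_wind_components(wind_speed_kt, wind_dir_deg, runway_number):
    """Decompose an LLWAS wind report into runway-relative components.

    wind_speed_kt : wind speed (knots)
    wind_dir_deg  : direction the wind is blowing FROM (same reference as the
                    runway numbering, assumed magnetic)
    runway_number : e.g. 26 for runway 26 (heading ~260 degrees)
    Returns (headwind_kt, crosswind_kt); crosswind is an absolute value.
    """
    runway_heading = runway_number * 10.0
    angle = math.radians(wind_dir_deg - runway_heading)
    headwind = wind_speed_kt * math.cos(angle)      # negative value = tailwind
    crosswind = abs(wind_speed_kt * math.sin(angle))
    return headwind, crosswind

def crosswind_alert(wind_speed_kt, wind_dir_deg, runway_number, threshold_kt=15.0):
    """Produce a short text alert suitable for an LLWAS ribbon display.

    The 15 kt threshold is an illustrative placeholder, not an operational limit.
    """
    _, crosswind = runway_wind_components(wind_speed_kt, wind_dir_deg, runway_number)
    if crosswind >= threshold_kt:
        return f"RWY {runway_number:02d} XWIND {crosswind:.0f} KT ALERT"
    return None

# 25 kt wind from 200 deg on runway 26 (60-degree offset): ~22 kt crosswind -> alert
print(crosswind_alert(25.0, 200.0, 26))
```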
Turbulence
•  Increase the number of aircraft reporting turbulence (EDR) over CONUS, and begin deploying the EDR algorithm on aircraft types that fly oceanic routes. ICAO has already determined that EDR is the turbulence parameter that should be downlinked from commercial aircraft. Nevertheless, there is no consistency between what ICAO recommends (and what the FAA is deploying) and aircraft in the WMO AMDAR and ASDAR programs. (A minimal EDR-estimation sketch follows this list.)
•  TDWR data may be used for convective turbulence detection in the terminal area. Algorithms have already been developed for WSR-88D radars (i.e., NTDA). These algorithms can be adapted to work on the TDWR data stream.
•  Although the NTDA algorithm is implemented on the NEXRAD, the data cannot be accessed at this time. Coordination between the FAA and NWS to facilitate access to the NTDA data would greatly enhance turbulence monitoring and forecast product generation.
•  Airborne radars with convective turbulence detection capabilities are now being shipped; however, any alert information generated by these systems stays onboard. These data should be downlinked for integration into the GTGN turbulence nowcasting product being developed for IOC.
•  NOAA is planning to deploy a number of ground-based GPS receivers. These data could be utilized to calculate turbulence information. (However, the methodology developed for use with GPS-LEO and GPS-aircraft links needs to be evaluated for its effectiveness for GPS to ground-based receiver links.)
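For reference, EDR is the cube root of the eddy dissipation rate, and one common way to estimate it from an in-situ wind time series uses the Kolmogorov inertial-range second-order structure function with Taylor’s hypothesis, as sketched below. The structure-function constant, the lag choice, and the synthetic data are illustrative; this is not the certified airborne EDR algorithm.

```python
import math

def edr_from_wind_series(u_m_s, sample_hz, airspeed_m_s, lag_s=1.0, c_ll=2.0):
    """Estimate EDR (eps^(1/3)) from a longitudinal wind time series.

    Uses the inertial-range second-order structure function
    D(r) = C * eps^(2/3) * r^(2/3), with Taylor's hypothesis r = airspeed * lag.
    The constant C ~ 2 and the 1-second lag are illustrative choices.
    """
    lag_samples = max(1, int(round(lag_s * sample_hz)))
    diffs = [(u_m_s[i + lag_samples] - u_m_s[i]) ** 2
             for i in range(len(u_m_s) - lag_samples)]
    if not diffs:
        raise ValueError("time series too short for the requested lag")
    d_ll = sum(diffs) / len(diffs)          # structure function at the chosen lag
    r = airspeed_m_s * lag_s                # spatial separation (m)
    return math.sqrt(d_ll / (c_ll * r ** (2.0 / 3.0)))

# Synthetic example: small wind fluctuations sampled at 8 Hz on a 230 m/s aircraft
series = [0.5 * math.sin(0.7 * i) for i in range(240)]
print(edr_from_wind_series(series, sample_hz=8.0, airspeed_m_s=230.0))
```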
Weather in cockpit
•  Technology exists to uplink weather information (both convection and turbulence) into the cockpit, which would be highly beneficial for oceanic routes. It would be straightforward to resurrect earlier uplink experiments and elevate them to more permanent status.
LWE and aircraft or runway deicing
•  Large airports may not have enough sensors distributed across the airport; a single snowfall rate or visibility measurement is likely not spatially representative.
•  The ASOS currently has only the freezing rain algorithm turned on, but others could be turned on as well. [need a bit more explanation]
Fog problem
•  ____ [need a statement here]
Space Weather
•  A significant early opportunity for space weather related sensor deployment is the DSCOVR platform, which is currently being considered for operational deployment by NOAA after transferring the hardware from NASA. Maximal utility for NextGen needs would be gained if this platform carries both solar wind instruments and a coronagraph.
[add more examples from different areas]
4.5 Overall lessons learned
There are several efforts underway to assess sensor networks, including NOAA’s evaluation of its observing capabilities and the National Research Council’s “network of networks” study (NRC 2009). The National Science Foundation (NSF) has compiled an extensive list of instrumentation and observing networks as well, and the Office of the Federal Coordinator for Meteorology (OFCM) is facilitating workshops to discuss similar efforts across agencies. The NRC report highlights that nationwide coordination is needed, but it is not clear who will emerge as the champion to lead this coordination, nor how the guiding principles (e.g., policies and incentives) have to be worked out in order to make this happen in an effective way.
Through its Right Sizing effort, the FAA is conducting an assessment of its observational assets. The FY09 effort included compilation of a spreadsheet/database (i.e., a sensor catalogue), tedious work in which it is easy to lose sight of the most important aspects (i.e., “losing sight of the forest while studying every tree in it”). Moreover, the FAA utilizes many observations from sensors that it does not own, which requires coordination across agencies (especially with NOAA and NASA, among others). This cross-agency coordination needs to be enhanced for the overall effort to be most effective.
The sensor catalogue provides a comprehensive list of assets, but it will not be
straightforward to digest that wealth of information (some of it less relevant) in terms of an in-
depth gap analysis. Part of the problem is the amount of information collected, but also its
organization—i.e., there are multiple dimensions to the content that cannot easily be extracted
from the spreadsheet/database. Moreover, the performance requirements remain to be
sorted out in the context of the spreadsheet/database—a non-trivial effort in its own right.
A series of focused workshops, with a broad team composition including representatives from both the weather and aviation user communities, will likely be an effective and complementary way to gather information about potential gaps for specific problem situations. Moreover, these information-gathering sessions will provide a number of suggestions on how to improve or mitigate the shortcomings (i.e., “low-hanging fruit” to be explored).
[there may be a few other things that we can point out . . .]