9.2.1 Definition of Performance Specifications
Accuracy is defined as the degree of agreement of a measurement with an accepted reference or true value [2]. Determining the absolute accuracy of an upper-air instrument through an inter-comparison study is difficult because there is no “reference” instrument that can provide a known or true value of the atmospheric conditions. This is due in part to system uncertainties and to inherent uncertainties caused by meteorological variability, spatial and temporal separation of the measurements, external and internal interference, and random noise. The only absolute accuracy check that can be performed is on the system electronics, by processing a simulated signal. Similarly, a true precision, or the standard deviation of a series of measured values about a mean measured reference value, can only be calculated using the system responses to repeated inputs of the same simulated signal.
The performance specifications provided by manufacturers for accuracy, precision, and other data quality objectives are derived in a number of ways, and it is prudent to understand the basis behind the published specifications. Manufacturers' specifications may be derived from the results of inter-comparison studies, from what the instrument system can resolve through the system electronics and processing algorithms, or from a combination of these methods. It may not be practical for a user to verify the exact specifications claimed by the manufacturers. What is needed, however, is a means of verifying that the data obtained from an upper-air system compare reasonably to observations obtained from another measurement system. Guidance for system acceptance testing, field testing, auditing, and data comparison is provided in Section 9.6.
To quantify the reasonableness of the data, one compares observations from the upper-air system being evaluated to data provided by another sensor that is known to be operating properly. In assessing how well the sensors compare, two measures are commonly used. The first involves calculating the “systematic difference” between the observed variables measured by the two methods. The second involves calculating a measure of the uncertainty between the measurements, which is referred to as the “operational comparability” (or simply “comparability”), as described in reference [100]. Comparability, for these purposes, is the root-mean-square (rms) of a series of differences between two instruments measuring nearly the same population. The comparability statistic provides a combined measure of both precision and bias, and expresses how well the two systems agree.
Using the ASTM notation [100], the systematic difference (or bias) is defined as:

\mathrm{bias} = \frac{1}{n} \sum_{i=1}^{n} \left( x_{a,i} - x_{b,i} \right)

where
n = number of observations
x_{a,i} = ith observation of the sensor being evaluated
x_{b,i} = ith observation of the “reference” instrument
Operational comparability (or root-mean-square error) is defined as:

\mathrm{comparability} = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( x_{a,i} - x_{b,i} \right)^{2} \right]^{1/2}
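As an illustration, the following short Python sketch (with hypothetical variable names and example values, not taken from reference [100]) computes both statistics for two matched series of observations:

import numpy as np

def bias_and_comparability(x_eval, x_ref):
    """Systematic difference (bias) and operational comparability (rms)
    between a sensor under evaluation and a reference sensor.

    x_eval, x_ref : paired observations of the same variable (equal length).
    """
    x_eval = np.asarray(x_eval, dtype=float)
    x_ref = np.asarray(x_ref, dtype=float)
    diff = x_eval - x_ref                        # x_a,i - x_b,i for each pair
    bias = diff.mean()                           # systematic difference
    comparability = np.sqrt((diff ** 2).mean())  # rms of the differences
    return bias, comparability

# Example: wind speeds (m/s) from an evaluated sensor vs. a collocated reference
x_eval = [4.2, 5.1, 6.3, 7.0]
x_ref = [4.0, 5.4, 6.1, 7.3]
b, c = bias_and_comparability(x_eval, x_ref)
print(f"bias = {b:+.2f} m/s, comparability (rms) = {c:.2f} m/s")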
Many of the inter-comparison programs discussed in the next section have evaluated instrument performance using the systematic difference and comparability statistics described here. Other statistical measures that can be used include, for example, correlation coefficients and linear regression.
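Where such additional measures are of interest, they can be computed from the same paired series. A minimal sketch, using hypothetical example values:

import numpy as np

# Paired wind-speed observations (m/s) from the evaluated and reference sensors
x_eval = np.array([4.2, 5.1, 6.3, 7.0, 8.4])
x_ref = np.array([4.0, 5.4, 6.1, 7.3, 8.1])

r = np.corrcoef(x_eval, x_ref)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(x_ref, x_eval, 1)  # least-squares linear fit
print(f"r = {r:.3f}, fit: x_eval ~= {slope:.2f} * x_ref {intercept:+.2f}")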
Another important performance specification for upper-air instrument systems is the data recovery rate. Data recovery is usually calculated as the ratio of the number of observations actually reported at a sampling height to the total number of observations that could have been reported while the instrument was operating (i.e., downtime is usually not included in data recovery statistics but is treated separately). Data recovery is usually expressed as a percentage as a function of altitude. Altitude coverage for upper-air data is often characterized in terms of the height up to which data are reported 80 percent of the time, 50 percent of the time, and so on.
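As a simple illustration (hypothetical heights and counts, not from this handbook), data recovery at each sampling height can be computed as the fraction of possible observations that were actually reported while the instrument was operating:

def recovery_rate(reported, possible):
    """Percent data recovery at one sampling height.

    reported : number of valid observations actually reported at that height
    possible : number of observations that could have been reported while
               the instrument was operating (downtime excluded)
    """
    return 100.0 * reported / possible

# Example: hourly winds over a 30-day month (720 possible hours, no downtime)
heights_m = [100, 500, 1000, 2000]
reported = [715, 690, 580, 360]
for z, n in zip(heights_m, reported):
    print(f"{z:5d} m: {recovery_rate(n, 720):5.1f} % recovery")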