9.6.2 System and Performance Audits

Audits of upper-air instrumentation to verify proper operation pose particular challenges. While system audits can be performed using traditional system checks and alignment and orientation techniques, performance audits of some instruments require unique, and sometimes expensive, procedures. In particular, unlike surface meteorological instrumentation, upper-air systems cannot be challenged with known inputs such as rates of rotation, orientation directions, or temperature baths. Recommended techniques for both system and performance audits of upper-air instruments are described below. These techniques have been categorized into system audit checks and performance audit procedures for radiosonde sounding systems, radar wind profilers, sodars, and RASS.

System Audits

System audits of an upper-air station should include a complete review of the QAPP, any monitoring plan for the station, and the station's standard operating procedures. The system audit will determine if the procedures identified in these plans are followed during station operation. Deviations from the plans should be noted and an assessment made as to what effect the deviation may have on data quality. To ensure consistency in the system audits, a checklist should be used. System audits should be conducted at the beginning of the monitoring program and annually thereafter.

Radiosonde Sounding Systems. For radiosonde sounding systems, an entire launch cycle should be observed to ensure that the site technician is following the appropriate procedures. The cycle begins with the arrival of the operator at the site and ends with completion of the sounding and securing of the station. The following items should be checked:

  • Ground station initialization procedures should be reviewed to ensure proper setup.


  • Sonde initialization procedures should be reviewed to verify that the sonde has been properly calibrated.


  • Balloon inflation should be checked to ensure an appropriate ascent rate.


  • Proper and secure attachment of sonde to balloon should be verified.


  • Orientation of the radio theodolite antenna should be checked, using solar sightings when possible. The antenna alignment should be maintained within ±2°.


  • The vertical angle of the radio theodolite antenna should be checked and should be within ±0.5°.


  • Data acquisition procedures should be reviewed and a sample of the acquired data should be inspected.


  • Data archiving and backup procedures should be reviewed.


  • Flight termination and system shutdown procedures should be reviewed.


  • Preventive maintenance procedures should be reviewed and their implementation should be checked.


  • Data processing and validation procedures should be reviewed to ensure that questionable data are appropriately flagged and that processing algorithms do not excessively smooth the data.


  • Data from several representative launches should be reviewed for reasonableness and consistency.


  • Station logbooks, checklists, and calibration forms should be reviewed for completeness and content to assure that the entries are commensurate with the expectations in the procedures for the site.

Remote Sensing Instrumentation. A routine check of the monitoring station should be performed to ensure that the local technician is following all standard operating procedures (SOPs). In addition, the following items should be checked:


  • The antenna and controller interface cables should be inspected for proper connection. If multi-axis antennas are used, this includes checking for the proper connection between the controller and individual antennas.


  • Orientation checks should be performed on the individual antennas or phased-array antenna. The checks should be verified using solar sightings when possible. The measured orientation of the antennas should be compared with the system software settings. The antenna alignment should be maintained within ±2°.


  • For multi-axis antennas, the inclination angle, or zenith angle from the vertical, should be verified against the software settings and the manufacturer's recommendations. The measured zenith angle should be within ±0.5° of the software setting in the data system.


  • For phased-array antennas, the array should be level within ±0.5° of the horizontal.


  • For multi-axis sodar systems, a separate, distinct pulse, or pulse train in the case of frequency-coded pulse systems, should be heard from each of the antennas. In a frequency-coded pulse system there may be a sound pattern that can be verified; the instrument manual should be consulted to determine whether such a pattern exists.


  • For sodar systems, general noise levels should be measured, in dBA, to assess the ambient conditions and their potential influence on the performance of the sodar.


  • The vista table for the site (see Section 9.5) should be reviewed. If a table is not available then one should be prepared.


  • The electronic systems and data acquisition software should be checked to ensure that the instruments are operating in the proper mode and that the data being collected are those specified by the SOPs.


  • Station logbooks, checklists, and calibration forms should be reviewed for completeness and content to assure that the entries are commensurate with the expectations in the procedures for the site.


  • The site operator should be interviewed to assess his/her knowledge of system operation and maintenance and his/her proficiency in performing quality control checks.


  • The antenna enclosures should be inspected for structural problems that could lead to failures, as well as for any debris that could cause drainage problems in the event of rain or snow.


  • Preventive maintenance procedures should be reviewed for adequacy and implementation.


  • The time clocks on the data acquisition systems should be checked against a known time standard and should agree within ±2 minutes.


  • The data processing procedures, including the methods for processing the data from sub-hourly to hourly intervals, should be reviewed for appropriateness (a minimal averaging sketch follows this list).


  • Data collected over a multi-day period (e.g., 2-3 days) should be reviewed for reasonableness and consistency. The review should include vertical consistency within given profiles and temporal consistency from period to period. For radar wind profilers and sodar, special attention should be given to the possibility of contamination of the data by passive or active noise sources.
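
The sub-hourly to hourly processing noted above is normally done by vector (component) averaging rather than by averaging scalar wind directions, which would fail across the 0°/360° crossover. The sketch below illustrates the idea; the record layout and function name are hypothetical and not tied to any particular vendor's data system.

```python
# Minimal sketch: combining sub-hourly wind records at one level into an
# hourly value by vector (component) averaging. Record layout and function
# name are illustrative only.
import math

def vector_average(records):
    """records: list of (speed_m_s, direction_deg_from) tuples for one hour.
    Returns (mean_speed, mean_direction) or None if the list is empty."""
    if not records:
        return None
    u_sum = v_sum = 0.0
    for speed, wdir in records:
        rad = math.radians(wdir)
        u_sum += -speed * math.sin(rad)   # eastward component ("from" convention)
        v_sum += -speed * math.cos(rad)   # northward component
    u, v = u_sum / len(records), v_sum / len(records)
    mean_speed = math.hypot(u, v)
    mean_dir = math.degrees(math.atan2(-u, -v)) % 360.0
    return mean_speed, mean_dir

# Example: four 15-minute samples straddling north
print(vector_average([(3.0, 350.0), (3.2, 355.0), (2.8, 5.0), (3.1, 10.0)]))
```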


Performance Audit and Comparison Procedures

Performance audits should be conducted at the beginning of the monitoring program and annually thereafter. A final audit should be conducted at the conclusion of the monitoring program. An overview of the recommended procedures for performance auditing is provided below.

Radiosondes. Performance auditing of radiosonde sounding systems presents a unique challenge in that the instrument is used only once and is rarely recovered. Therefore, a performance audit of a single sonde provides little value in assessing overall system performance. The recommended approach is to audit only the instruments that are used to provide ground truth data for the radiosondes prior to launch (thermometer, relative humidity sensor, psychrometer, barometer, etc.). The reference instruments used to audit the site instruments should be traceable to a known standard. Details on these audit methods can be found in reference [2].

In addition, a qualitative assessment of the direction and speed of balloon travel should be made during an observed launch for comparison with the computed wind measurements. An alternative approach is to attach a second sonde package to the balloon, track it from an independent ground station, and compare the results of the two systems. An optical tracking system is adequate for this type of comparison.
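
For an optical (single-theodolite) comparison, winds are derived from successive azimuth and elevation readings together with a balloon height assumed from the ascent rate. The sketch below illustrates the geometry; the reading interval, ascent rate, and function names are illustrative.

```python
# Minimal sketch: layer-average wind from single-theodolite tracking of a
# balloon, assuming a constant ascent rate supplies the height. All values
# shown are illustrative.
import math

def position(azimuth_deg, elevation_deg, height_m):
    """Horizontal (east, north) position of the balloon in metres."""
    horiz_range = height_m / math.tan(math.radians(elevation_deg))
    az = math.radians(azimuth_deg)
    return horiz_range * math.sin(az), horiz_range * math.cos(az)

def layer_wind(reading1, reading2, ascent_rate_m_s):
    """Mean wind between two timed readings of (time_s, azimuth_deg, elevation_deg)."""
    t1, az1, el1 = reading1
    t2, az2, el2 = reading2
    x1, y1 = position(az1, el1, ascent_rate_m_s * t1)
    x2, y2 = position(az2, el2, ascent_rate_m_s * t2)
    dt = t2 - t1
    u, v = (x2 - x1) / dt, (y2 - y1) / dt                   # balloon drift = wind components
    speed = math.hypot(u, v)
    direction = math.degrees(math.atan2(-u, -v)) % 360.0    # "from which" convention
    return speed, direction

# Example: readings taken 60 s apart with an assumed 2.5 m/s ascent rate
print(layer_wind((60, 40.0, 35.0), (120, 45.0, 30.0), 2.5))
```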

Remote Sensing Instrumentation. Methods for performance audits and data comparisons of remote sensing instrumentation have been under development for a number of years. Only recently has interim guidance (reference [2]) been released to help standardize performance audit methods, and even with that guidance a number of areas are still under development. The recommended procedures for performance audits and data comparisons of remote sensors presented below typically incorporate inter-comparison checks. If inter-comparison checks are used, a quick review of the datasets should be performed before the comparison system is dismantled.

Sodar. The performance audit is used to establish confidence in the ability of the sodar to accurately measure winds. A performance audit of a system typically introduces a known value into the sensor and evaluates the sensor response. It may not be possible to perform this type of audit for all types of sodar instruments; in such cases, a comparison between the sodar and another measurement system of known accuracy should be performed to establish the reasonableness of the sodar data. With any of the audit or comparison methods, the evaluation of the data should be performed on a component-specific basis corresponding to the sodar beam directions (a projection sketch is provided after the list below). Any of the following approaches may be considered in the sodar performance evaluation.

  • Comparison with data from an adjacent tall tower. Using this approach, conventional meteorological measurements from sensors mounted on tall towers (at elevations within the operating range of the sodar) are compared with the sodar data. This method should only be used if the tall tower is an existing part of a monitoring program and its measurements are valid and representative of the sodar location. At least 24 hours of data should be compared. The tower data should be time averaged to correspond to the sodar averaging interval, and the comparisons should be made on a component basis (a comparison-statistics sketch also follows this list). This comparison will provide an overall evaluation of the sodar performance as well as a means for detecting potential active and passive noise sources.


  • Comparison with data from another sodar. This comparison uses two sodars operating on different frequencies. The comparison sodar should be located in an area that will allow it to collect data that are representative of the site sodar measurements. At least 24 hours of data should be collected for the comparison. If the measurement levels of the two sodars differ, the comparison sodar data should be volume averaged to correspond with the site sodar, and the comparison sodar time averaging should correspond to that of the site sodar. As with the adjacent tall tower, the comparison should be performed on a component basis. This comparison will provide an overall evaluation of the sodar performance as well as a means for detecting potential active and passive noise sources.


  • Comparison with radiosonde data. This comparison uses data obtained from a radiosonde carried aloft by a free-flight, slow-rise balloon. The balloon should be inflated so that the ascent rate is about 2 m/s, which provides appropriate vertical resolution for the comparison data within the boundary layer. The wind data should be volume averaged to correspond with the sodar data, and the comparisons should be made on a component as well as a total vector basis. The launch times should be selected to avoid periods of changing meteorological conditions. Evaluation of the comparison data should recognize the potential differences due to the differing spatial and temporal resolution of the measurements (i.e., the near-instantaneous data collected by the radiosonde as compared with the time-averaged data collected by the sodar). This comparison will provide an overall evaluation of the sodar performance as well as a means for detecting potential active and passive noise sources.


  • Comparison with tethersonde data. The tethersonde comparison is performed using single or multi-sonde systems. Using this approach, a tethered balloon is used to lift the sonde(s) to altitude(s) corresponding with the sodar measurement levels. This method should collect data at one or more layers appropriate to the program objectives. At a minimum, data corresponding to the equivalent of five sodar averaging periods should be collected at each altitude. Multiple altitudes can be collected simultaneously using a multi-sonde system with two or more sondes. The individual sonde readings should be processed into components that correspond to the sodar beam directions and then time averaged to correspond to the sodar averaging period. This comparison will provide an overall evaluation of the sodar performance as well as a means for detecting potential active and passive noise sources.


  • Comparison with data from an anemometer kite. This measurement system is suitable in relatively high wind speed conditions that would preclude the use of a tethersonde. The kite anemometer consists of a small sled-type kite attached to a calibrated spring gauge. Horizontal wind speeds are determined from the pull of the kite on the spring gauge. The altitude of the kite (i.e., the height of the measured wind) is determined from the elevation angle and the distance to the kite, and the wind direction is determined by measuring the azimuth angle to the kite. At a minimum, data corresponding to the equivalent of five sodar averaging periods should be collected at a level appropriate to the monitoring program objectives. The wind speed and kite azimuth and elevation readings should be taken every minute. The individual readings should be processed into components that correspond to the sodar beam directions and then time averaged to correspond to the sodar averaging period. This comparison will provide an overall evaluation of the sodar performance as well as a means for detecting potential active and passive noise sources.


  • Use of a pulse transponding system. A pulse transponding system provides a means of testing the sodar system processing electronics for accuracy through the interpretation of simulated Doppler-shifted signals at known time intervals [104]. This method can be considered an audit rather than a comparison because it provides a signal input equivalent to a known wind speed, wind direction, and altitude to test the response of the sodar system. At least three averaging periods of transponder data should be collected with the sodar in its normal operating mode. Depending on the sodar configuration, this method, along with an evaluation of the internal consistency of the sodar data to identify potential passive and active noise sources, may serve as the performance audit without the need for further comparisons. In the case of phased-array sodars, an additional comparison is needed to verify proper beam steering; this comparison may be performed using any of the methods above, and three sodar averaging periods at a single level are sufficient for this check. It should be noted that current transponder technology is limited to sodars with three beams.
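
For the component-specific evaluation described above, reference winds must be expressed along the directions the sodar actually measures, i.e., the radial components along each beam. A minimal sketch of that projection is given below; the beam azimuth and zenith angles shown are placeholders for the audited system's actual antenna geometry.

```python
# Minimal sketch: projecting a reference wind (u, v, w) onto sodar beam
# directions so comparisons can be made on a per-beam (radial) basis.
# The beam geometry below is illustrative only.
import math

def radial_component(u, v, w, azimuth_deg, zenith_deg):
    """Wind component along a beam pointing up and away from the antenna.
    u = eastward, v = northward, w = vertical (m/s); azimuth is clockwise
    from true north; zenith is measured from the vertical."""
    az = math.radians(azimuth_deg)
    zen = math.radians(zenith_deg)
    return (u * math.sin(zen) * math.sin(az)
            + v * math.sin(zen) * math.cos(az)
            + w * math.cos(zen))

# Example: a three-beam sodar (one vertical beam, two tilted 17 deg toward N and E)
beams = {"vertical": (0.0, 0.0), "north": (0.0, 17.0), "east": (90.0, 17.0)}
u, v, w = 2.0, 4.0, 0.1   # reference wind from a tower or sounding
for name, (az, zen) in beams.items():
    print(name, round(radial_component(u, v, w, az, zen), 2))
```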

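Once the comparison data have been averaged to a common interval and matched by height, the component-by-component comparison called for above is usually summarized with a few simple statistics. The sketch below is illustrative only; acceptance criteria should be taken from the QAPP or the interim guidance (reference [2]), not from this sketch.

```python
# Minimal sketch: summarizing a sodar/reference inter-comparison on a
# per-component basis after both datasets have been averaged to a common
# interval and height. Bias, RMSE (comparability), and correlation are the
# statistics commonly reported.
import math

def comparison_stats(sodar, reference):
    """sodar, reference: equal-length lists of matched component values (m/s);
    missing values may be given as None."""
    pairs = [(s, r) for s, r in zip(sodar, reference)
             if s is not None and r is not None]
    if not pairs:
        return None
    n = len(pairs)
    diffs = [s - r for s, r in pairs]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mean_s = sum(s for s, _ in pairs) / n
    mean_r = sum(r for _, r in pairs) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in pairs) / n
    sd_s = math.sqrt(sum((s - mean_s) ** 2 for s, _ in pairs) / n)
    sd_r = math.sqrt(sum((r - mean_r) ** 2 for _, r in pairs) / n)
    corr = cov / (sd_s * sd_r) if sd_s and sd_r else float("nan")
    return {"n": n, "bias": bias, "rmse": rmse, "correlation": corr}

# Example with hypothetical matched u-component averages (m/s)
print(comparison_stats([1.8, 2.4, 3.1, 2.9], [2.0, 2.5, 3.0, 3.2]))
```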

Radar Wind Profilers. At present, the performance of radar wind profilers can only be evaluated by comparison to collocated or nearby upper-air measurements. Various types of comparison instruments can be used including tall towers, sodar, radiosonde sounding systems, and tethersondes. A tethersonde may be used, but care should be taken to ensure that it does not interfere with the radar operation. Since it is important to have confidence in the reference instrument, an independent verification of operation of the reference instrument should also be obtained. If using a sodar or a radiosonde sounding system, the procedures outlined above should be followed to ensure acceptable operation of the system. If data from an adjacent tower are used, then it is recommended that the quality of the tower-based data be established. The comparison methods should follow those described for sodars above. Where RASS acoustic sources may interfere with the comparison sodar operation, care should be taken to identify potentially contaminated data.

RASS. Like the radar wind profiler, the evaluation of a RASS relies on a comparison to a reference instrument. The recommended method is to use a radiosonde sounding system to measure the variables needed to calculate virtual temperature (i.e., pressure, temperature, and humidity). Sufficient soundings should be made for comparisons during different times of the day to evaluate the performance of the system under different meteorological conditions. Data collected from the sonde should be volume averaged into intervals consistent with the RASS averaging volumes, and the values should be compared on a level-by-level and overall basis.
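
Because RASS reports virtual temperature directly, the sonde's pressure, temperature, and humidity must first be converted to virtual temperature before the level-by-level comparison. The sketch below illustrates one way to do this; the saturation vapor pressure approximation (Bolton, 1980) and the gate boundaries are assumptions for illustration, and the actual averaging should follow the RASS range-gate geometry.

```python
# Minimal sketch: converting radiosonde pressure, temperature, and relative
# humidity to virtual temperature, then averaging the sonde samples that
# fall within one RASS range gate. Formulas and gate boundaries are
# illustrative assumptions.
import math

EPSILON = 0.622  # ratio of gas constants, Rd/Rv

def virtual_temperature(p_hpa, t_c, rh_pct):
    """Virtual temperature (K) from pressure (hPa), temperature (deg C),
    and relative humidity (%)."""
    es = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # saturation vapor pressure, hPa
    e = (rh_pct / 100.0) * es                            # actual vapor pressure, hPa
    w = EPSILON * e / (p_hpa - e)                        # water vapor mixing ratio, kg/kg
    t_k = t_c + 273.15
    return t_k * (1.0 + w / EPSILON) / (1.0 + w)

def gate_average(samples, gate_bottom_m, gate_top_m):
    """Average sonde virtual temperatures (height_m, tv_k) within one gate."""
    in_gate = [tv for z, tv in samples if gate_bottom_m <= z < gate_top_m]
    return sum(in_gate) / len(in_gate) if in_gate else None

# Example: two sonde samples falling inside a 150-210 m RASS gate
sonde = [(160.0, virtual_temperature(995.0, 18.2, 65.0)),
         (200.0, virtual_temperature(990.0, 17.9, 67.0))]
print(round(gate_average(sonde, 150.0, 210.0), 2), "K")
```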
