
MARCH 2000  LUDWIG AND SINTON  337

TABLE 1. Surface observation sites used (with the exception of Santa Rosa) to compare observations and model results (UTM is Zone 10 Universal Transverse Mercator coordinates, AFB is Air Force Base, NAS is Naval Air Station, wind direction is measured in degrees or with a 16-point compass, kt is knots, and mph is miles per hour). Dataset indicates whether the site was used in the 4-, 7-, and 11-station model runs.

[Table 1 body: 40 sites, with columns Site; Datasets (4-, 7-, 11-station); UTM East and North (km); Elevation (MSL, m); Agency; Direction units; Speed units. Sites listed: San Rafael, Travis AFB, Concord, Livermore, Alameda NAS, Oakland Airport, Hayward, San Francisco Airport, San Carlos, Palo Alto, San Jose Airport, Reed Hillview, San Mateo Bridge, Fort Funston, Union City, Mount Tamalpais, Black Diamond, Briones, North Oakland, Las Trampas, South Oakland, Livermore station B, Rose Peak, La Honda, Point Bonita, Golden Gate Bridge, Angel Island, Davis Point, Buoy 12, Pillar Point, Port Chicago, Rio Vista, South San Francisco, Treasure Island, San Francisco Pilot Station, Alviso, Moffett Field, San Mateo, Buoy 26, and Santa Rosa. Individual table entries are not legible here.]

* BAAQMD = Bay Area Air Quality Management District, CDF = California Department of Forestry and Fire Protection, DARDC = device for automatic remote data collection, FAA = Federal Aviation Administration, NDBC = National Data Buoy Center, NWS = National Weather Service, PG&E = Pacific Gas and Electric, RAWS = remote automated weather station, USAF = U.S. Air Force, USCG = U.S. Coast Guard, USN = U.S. Navy.

Navy, U.S. Air Force, and others, so they might serve a similar purpose. Tucker (1997) has pointed out the many problems associated with combining data from different networks with different purposes and different siting and operational standards. Nevertheless, there are substantial benefits to be gained by combining data from the standard sources with those collected on well-conceived and well-maintained special networks, as has been done here.

The model will be discussed briefly in the next section. Here, it is sufficient to note that its inputs are wind data from surface sites, plus whatever wind and temperature profiles are available in or near the modeled region. In these analyses, upper-air observations were limited to the most recent of the twice-daily Oakland radiosonde observations. We chose to analyze the data for 0700 and 1900 local time so that the upper-level information would have been collected close to the analysis time. For the most part, the surface observations used are made hourly, but some coastal buoys report at 3-h intervals. The selected test hours were chosen so that the buoy data also would be available.

Inputs included observations from the NWS, FAA, U.S. Navy, U.S. Coast Guard, and U.S. Air Force. The Bay Area Air Quality Management District contributed data from their network of stations, as did corporations such as Pacific Gas and Electric. Data were not always available from all stations. The wind direction data often were reported to the nearest 10° or, in many cases, to the nearest 22.5° (16-point compass). Table 1 lists the 40 stations used for this study. Thirty-nine of the stations are within the model domain and were used for comparisons of observed and modeled winds. The remaining station, Santa Rosa, was used as an input for some runs but could not be compared with model results because it is outside the domain. The model can use stations outside the domain, but, with the exception of Santa Rosa, we chose to use only data from within the modeled area.

338  VOLUME 39  JOURNAL OF APPLIED METEOROLOGY

FIG. 2. Schematic diagram showing the effect of the compression factor on flow surface configuration.

Figure 1 shows the locations of all the sites superimposed on a map of the San Francisco Bay Area. The stations not used as inputs are marked with plus signs (+) in either black or white, depending on the map background. Three different sets of input stations were used, with 4, 7, and 11 stations, respectively. The 4-station set is identified in Fig. 1 by the circles containing "4." The 7-station set added the three stations marked with a "7," and the 11-station set included the above plus the sites marked with "11." The subsets of sites also are identified in Table 1. The calculation domain is outlined by the rectangle in the figure.

The numbers of stations chosen were only somewhat arbitrary. We felt four to be the minimum number that could adequately describe the common flow features that occur in the region. When more than 11 stations were selected, there were many hours when at least one of those stations was inoperative, thereby reducing the number of hours that could be used in the evaluation process (see below). The specific sites used as inputs were selected according to two criteria. First, they were chosen to be distributed spatially so that they would capture as much of the expected spatial variability of flow in the region as possible. Second, the chosen sites had more complete datasets than some of the others. The completeness of the datasets is important because the test procedure requires data from all the stations; if any one input station is not reporting, then that hour cannot be used. Once the input sites were selected, those times when data were available from all the input sites were selected for use in the model runs. The numbers of hours available when data were collected at all stations in the 4-, 7-, and 11-station sets were 461, 439, and 343, respectively.

The initial data collection process is convoluted (Strach et al. 1997; Ludwig et al. 1997). The NWS automatically polls and collects observations hourly from many of the sites and uses the information to update the Automated Local Event Reporting in Real Time (ALERT) file maintained by the Monterey, California, NWS office. A personal computer (PC) at the Meteorology Department at San Jose State University (SJSU) polls the ALERT file and other available stations via the phone system and a modem. These data are received in a raw form that includes a large amount of text that must be processed to produce formatted files for input to the wind model. After the raw data files are received, they are transmitted over an Ethernet connection to a workstation at SJSU and from there to a workstation at the Center Weather Service Unit (CWSU) at the FAA Control Center in Fremont, California. This transmission is over a dedicated digital phone line. The CWSU workstation sends the raw data via file transfer protocol (FTP) to a PC that parses the raw data and creates files in the format used by the WOCSS model. The parsing PC also runs a version of the WOCSS model for display at CWSU (Strach et al. 1997) and transmits the input data files back to the SJSU workstation, which then sends the parsed data via an Internet FTP connection to a web server at the U.S. Geological Survey's (USGS) Western Region Office in Menlo Park, California. The data are stored on the USGS computer in a format suitable for direct input into the wind analysis model. The WOCSS code is used at the USGS to provide the information for display on the Internet.

One of the unique features of the dataset is that it combines data from several different agencies. As a consequence, the spatial and temporal coverage is much better than it would be if only data from a single source were used. It is not unusual for a region to be covered with sites that are operated independently by a variety of agencies. One of the lessons to be drawn from this study is that there is much to be gained by making the effort to consolidate data from the different sources into a single archive with common formats. There are also drawbacks: instrumentation and quality control are not always consistent, and the siting of the anemometers may not be uniformly representative. Most of the agencies involved have their own routine maintenance and quality-control regimens, and, for the most part, this study relied on those. One of the unanticipated benefits of the regular display of the wind data and analyses on the Internet has been to provide an informal quality-control tool; there have been several occasions when malfunctions at specific sites have been identified. Even though the dataset has undeniable shortcomings, it still has provided a useful picture of model performance and the factors that affect it.

TABLE 2. Meteorological stratification categories. Columns: meteorological category; hour (UTC); speed at 4000 ft (m s^-1); direction at 4000 ft (quadrant); inversion base height (m); inversion strength (°C m^-1); and number of available cases in the 4-, 7-, and 11-station datasets. [Categories: 1, all cases; 2, 0300 UTC; 3, 1500 UTC; 4-6, NW, SW, and NE-SE winds of at least 5 m s^-1; 7-9, the same directions below 5 m s^-1; 10, no inversion; 11-14, combinations of inversion base height below or above 200 m with inversion strength below or above 0.03 °C m^-1. Case counts for categories 1-3: 461, 225, and 236 (4-station); 439, 216, and 223 (7-station); 343, 160, and 183 (11-station); remaining entries are not legible here.]

TABLE 3. Frequency of the joint occurrence of the meteorological categories in the 4-station input dataset. Dashes indicate categories that are mutually exclusive. [14 x 14 matrix of joint-occurrence counts for the categories of Table 2; individual entries are not legible here.]

    3. The wind analysis model

The archived observations were used to generate wind fields with the WOCSS approach described in detail by Ludwig et al. (1991). This approach is an interpolation scheme that uses the critical dividing streamline concept (e.g., McNider et al. 1984; Sheppard 1956) in its treatment of terrain-induced vertical motion. It also invokes mass conservation constraints. It evolved from a sigma-coordinate wind-energy planning model of Bhumralkar et al. (1980) that applied a variational calculus numerical scheme similar to that used by Sherman (1978) in her Cartesian coordinate model. Endlich (1984) introduced an iterative technique (Endlich 1967) to remove divergence. He also introduced subjectively defined coordinate surfaces that were allowed to intersect the terrain, so that the flow was forced around the intersected topographical obstacles when the flow was adjusted toward two-dimensional nondivergence.

Ludwig et al. (1991) adapted the critical streamline concept to provide an objective method for defining the flow surface shapes. When potential temperature increases with height and atmospheric processes are approximately adiabatic, there will be a height where the work done in displacing the air from its equilibrium position against the buoyant restoring force equals its original kinetic energy. According to the critical streamline concept, this equivalence of potential and kinetic energy defines the maximum obstacle height that can be surmounted by the flow. The equality of potential and kinetic energies can then be expressed in terms of Z_max, the greatest height to which the air at height z_0


TABLE 4. Wind speed estimation performance using four observation sites (14 997 data pairs). Boldface indicates best performance, italics worst (percentiles combined; see text).

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
                                                (statistics in m s^-1)
   8        0.1            20       0.72    0.3    2.66    2.56   -0.80    1.90   -2.00    4.10
  16        0.1            20       0.72    0.3    2.66    2.56   -0.80    1.90   -2.00    4.10
   8        0.5            20       0.70    0.3    2.69    2.60   -0.80    1.90   -2.10    4.10
  16        0.5            20       0.71    0.3    2.67    2.58   -0.80    1.90   -2.00    4.10
  16        0.1             1       0.74    0.2    2.69    2.59   -0.70    1.90   -2.00    4.20
  16        0.1            10       0.69    0.3    2.65    2.56   -0.80    1.90   -2.10    4.10
  16        0.1            30       0.67    0.2    2.65    2.56   -0.90    1.80   -2.10    4.00
  16        0.2            25       0.71    0.3    2.67    2.57   -0.80    1.90   -2.00    4.10

TABLE 5. Wind speed estimation performance using seven observation sites (14 305 data pairs). Boldface indicates best performance, italic worst (percentiles combined; see text).

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
                                                (statistics in m s^-1)
   8        0.1            20       0.51    0.2    2.52    2.46   -0.90    1.60   -2.20    3.70
  16        0.1            20       0.50    0.1    2.52    2.47   -0.90    1.60   -2.20    3.70
   8        0.5            20       0.49    0.2    2.54    2.49   -0.90    1.60   -2.20    3.70
  16        0.5            20       0.50    0.1    2.53    2.48   -0.90    1.60   -2.20    3.70
  16        0.1             1       0.52    0.1    2.56    2.50   -0.80    1.60   -2.20    3.80
  16        0.1            10       0.48    0.1    2.52    2.47   -0.90    1.60   -2.20    3.70
  16        0.1            30       0.47    0.1    2.53    2.48   -1.00    1.60   -2.20    3.70
  16        0.2            25       0.50    0.1    2.52    2.47   -0.90    1.60   -2.20    3.70

can be lifted by the local wind speed V_0 against the local potential temperature gradient dθ/dz:

    Z_max = z_0 + V_0 [(g/T)(dθ/dz)]^(-1/2),    (1)

where T is the average temperature in the layer between z_0 and Z_max, and g is the gravitational acceleration.

Equation (1) is used to define coordinate surfaces that approximate the shape of the flow, intersecting the terrain in areas where the flow cannot pass over it. Winds are set to zero at points on the flow surface that are below the local terrain. Adjustments toward nondivergence then cause the flow to pass around the intersected obstacles. Use of surfaces defined by Eq. (1) not only incorporates the critical dividing streamline effects into the analysis but also reduces the three-dimensional problem to several, more easily solved, two-dimensional problems.
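Equation (1) can be evaluated directly; a minimal sketch in Python (the function and variable names are ours, not from the WOCSS code):

```python
import math

G = 9.81  # gravitational acceleration (m s^-2)

def z_max(z0, v0, dtheta_dz, t_mean):
    """Greatest height to which air at height z0 (m) can be lifted by the
    local wind speed v0 (m s^-1) against a potential temperature gradient
    dtheta_dz (K m^-1, must be positive), per Eq. (1); t_mean is the
    average temperature (K) of the layer."""
    n = math.sqrt((G / t_mean) * dtheta_dz)  # Brunt-Vaisala frequency (s^-1)
    return z0 + v0 / n
```

For example, a 5 m s^-1 wind at 100 m under a 0.01 K m^-1 gradient with a 285 K layer temperature can rise to roughly 370 m; stronger winds or weaker stability raise Z_max, matching the critical-streamline picture.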

The steps involved in the overall wind calculation process are as follows.

1) Find the lowest terrain.
2) Define flow surface heights over low terrain.
3) Specify a set of terrain-following surfaces that have the defined heights over the low terrain.
4) Interpolate observed winds to grid points on these terrain-following surfaces.
5) Use Eq. (1) and the observed temperature and wind profiles to estimate the maximum height to which the air can rise from its height over the low terrain.
6) Impose constraints on the maximum rise so that each layer does not rise more than the next lower layer or approach too close to the next higher layer.
7) Define flow surfaces so that the maximum rise is over the highest terrain and is proportional to elevation (relative to the lowest terrain) elsewhere.
8) Reinterpolate winds to the new critical streamline flow surfaces.
9) Assign zero winds where surfaces intersect the terrain.
10) Adjust the flow on the surfaces toward two-dimensional nondivergence using Endlich's (1967) iterative method; this adjustment will force flow around terrain obstacles.
11) Interpolate the winds on the flow surfaces back to constant altitude surfaces or to terrain-following surfaces.
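The adjustment toward two-dimensional nondivergence in step 10 can be illustrated with a toy relaxation loop (a loose sketch of the idea only; Endlich's actual scheme differs in detail, and terrain-intersection points with zero wind are ignored here):

```python
import numpy as np

def reduce_divergence(u, v, dx=1000.0, n_iter=20):
    """Nudge a gridded 2D wind field toward two-dimensional nondivergence.

    u, v: 2D wind component arrays (m s^-1); dx: grid spacing (m).
    Each sweep computes the divergence at interior points and spreads an
    opposing correction over the four neighboring components, which
    halves each point's own divergence contribution per sweep.
    """
    u, v = u.astype(float).copy(), v.astype(float).copy()
    for _ in range(n_iter):
        # centered-difference divergence at interior grid points
        div = ((u[1:-1, 2:] - u[1:-1, :-2]) +
               (v[2:, 1:-1] - v[:-2, 1:-1])) / (2.0 * dx)
        corr = 0.25 * div * dx
        u[1:-1, 2:] -= corr
        u[1:-1, :-2] += corr
        v[2:, 1:-1] -= corr
        v[:-2, 1:-1] += corr
    return u, v
```

A few dozen sweeps of this kind substantially reduce the residual divergence of a rough initial field; the WOCSS runs described below used between 1 and 30 iterations.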

All the model calculations described here were made on a 108 x 123 grid that covers the area enclosed by the rectangle shown in Fig. 1. The horizontal grid spacing was 1 km. In step 11 above, the results were interpolated to anemometer height (10 m). Observations were compared with the values calculated at the nearest grid point. No attempt was made to interpolate to the exact observation site location, because this degree of precision was not considered necessary with the relatively fine, 1-km grid that was used. Specific examples of San Francisco Bay Area wind analyses could be obtained from the Internet archives at the USGS Western Region Office (http://sfports.wr.usgs.gov/wind/).
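Matching an observation to the nearest grid point is a simple index computation; a sketch in which the grid origin is an assumed placeholder, since the paper gives the domain only graphically in Fig. 1:

```python
def nearest_grid_index(east_km, north_km, origin_east_km, origin_north_km,
                       spacing_km=1.0):
    """Nearest grid indices (column, row) for a UTM position on a regular
    grid; callers should still check the result against the grid extent
    (108 x 123 points in this study)."""
    i = round((east_km - origin_east_km) / spacing_km)
    j = round((north_km - origin_north_km) / spacing_km)
    return i, j
```

With a hypothetical origin of (520.0, 4130.0) km, the Travis AFB coordinates from Table 1 map to grid point (74, 105).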


TABLE 6. Wind speed estimation performance using 11 observation sites (11 271 data pairs). Boldface indicates best performance, italics worst (percentiles combined; see text).

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
                                                (statistics in m s^-1)
   8        0.1            20       0.45    0.1    2.27    2.23   -0.60    1.30   -1.80    3.20
  16        0.1            20       0.43    0.1    2.27    2.23   -0.70    1.30   -1.90    3.20
   8        0.5            20       0.43    0.1    2.31    2.27   -0.70    1.30   -1.90    3.20
  16        0.5            20       0.45    0.1    2.27    2.23   -0.70    1.30   -1.90    3.20
  16        0.1             1       0.49    0.1    2.28    2.22   -0.60    1.30   -1.90    3.30
  16        0.1            10       0.44    0.1    2.26    2.22   -0.70    1.30   -1.90    3.20
  16        0.1            30       0.43    0.1    2.28    2.24   -0.70    1.30   -1.90    3.24
  16        0.2            25       0.46    0.1    2.27    2.23   -0.70    1.30   -1.84    3.20

TABLE 7. Wind direction estimation performance using four observation sites (12 835 data pairs). Boldface indicates best performance, italics worst (percentiles combined; see text). All statistics are in compass degrees.

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
   8        0.1            20        3.2    1.3    60.6    60.5   -32.3    25.6   -78.4    67.9
  16        0.1            20        4.3    1.7    61.0    60.8   -33.6    24.0   -79.8    68.2
   8        0.5            20        3.0    1.2    60.7    60.6   -31.9    25.9   -78.3    68.5
  16        0.5            20        3.9    1.7    60.8    60.7   -33.1    24.3   -79.3    68.2
  16        0.1             1        4.6    1.1    61.3    61.1   -35.1    23.6   -81.8    69.3
  16        0.1            10        5.0    1.4    60.6    60.4   -34.8    23.3   -80.8    66.1
  16        0.1            30        4.5    1.9    60.6    60.5   -34.2    23.8   -79.8    66.9
  16        0.2            25        4.1    1.9    60.9    60.8   -33.3    24.1   -79.6    68.0

    4. Performance evaluation

There are several factors that we believed might affect the degree to which the model results agree with the observations. For this model, or any other diagnostic model, the number of inputs used will be very important. That is why the model was tested using three networks with different numbers of stations.

Two other characteristics of the model that might affect performance are the number and heights of the surfaces defined at step 3 of the above list. If there are too few levels or they are too high, then they will pass over the terrain without intersecting it when there is an inversion. We chose to run the model using 8 and 16 levels. As noted above, those levels are defined over the lowest terrain, the sea level areas in this case. The heights used for calculations with eight surfaces were 0, 150, 300, 450, 600, 900, 1200, and 1500 m. The range of heights covers the extent of the terrain in the San Francisco Bay Area and usually will encompass any stable layers aloft that might affect the flow. The 150-m separation at the lower heights was felt to be necessary to represent effects from the smaller terrain features. The separation was halved to see if results were improved. These 16-level runs added surfaces at the following heights above sea level: 75, 225, 375, 525, 675, 750, 1050, and 1350 m. The 0-m level is not used for the calculations but rather to store values interpolated to a height of 10 m above the local terrain.

Another factor that has the potential to alter the results is the number of iterations specified when adjusting the winds toward nondivergence. If no iterations are used, then the analysis essentially represents what is obtained from the initial interpolation to the flow surfaces. More iterations will bring the wind field closer to two-dimensional nondivergence and should result in greater terrain influence under stable conditions. Most of the tests that are described used 20 iterations, but the effects of 1, 10, 25, and 30 iterations also were tested.

The final model characteristic that was tested was the degree to which inversions aloft are allowed to affect flow in the layers below the inversion. This characteristic was thought to be an important factor in a region of complex terrain such as the San Francisco Bay Area, where a strong, elevated summer inversion often caps a near-neutral marine boundary layer. The model contains a constant compression factor that can be specified to define the maximum height to which the air is allowed to rise in the layers below an inversion. Figure 2 shows the difference between flow surface shapes for compression factors of 0.1 and 0.5 with the same temperature profile (shown schematically at the right of Fig. 2). The compression factor defines the maximum fraction by which the initial separation between two surfaces can be reduced in a less-stable lower layer. The lower the value (which must be between zero and one), the more nearly horizontal the flow and the more likely it is to intersect the terrain. Hence, lower values of this factor should result in greater interaction of the flow with the terrain, as can be seen in the figure.
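On one reading of this rule (our interpretation, not code from WOCSS): with compression factor c, the separation between two adjacent flow surfaces may shrink to no less than (1 - c) times its initial value, which caps how far a lower surface can rise toward a capped surface above it. A minimal sketch:

```python
def capped_height(desired_height, height_above, initial_sep, c):
    """Limit a flow surface's rise so its separation from the surface above
    shrinks by at most the fraction c (the compression factor, 0 < c < 1).

    desired_height: height the surface would reach unconstrained (m);
    height_above: current height of the next surface up (m);
    initial_sep: separation between the two surfaces over low terrain (m).
    """
    assert 0.0 < c < 1.0
    min_sep = (1.0 - c) * initial_sep  # separation may not fall below this
    return min(desired_height, height_above - min_sep)
```

With c = 0.1, a surface that starts 150 m below an inversion-capped surface at 600 m can rise only to about 465 m, however high the terrain; with c = 0.5 it can reach about 525 m. Low c thus keeps lower surfaces nearly horizontal and more likely to intersect terrain, as in Fig. 2.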

    The values of various model performance measures


TABLE 8. Wind direction estimation performance using seven observation sites (12 233 data pairs). Boldface indicates best performance, italics worst (percentiles combined; see text). All statistics are in compass degrees.

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
   8        0.1            20        4.6    1.8    60.1    59.9   -20.6    33.4   -65.5    76.1
  16        0.1            20        4.0    2.0    60.0    59.9   -21.8    32.4   -66.4    76.0
   8        0.5            20        4.8    1.8    60.2    60.0   -20.8    33.8   -65.1    76.6
  16        0.5            20        4.2    2.0    60.0    59.8   -21.5    32.6   -66.1    76.2
  16        0.1             1        3.0    1.1    60.2    60.1   -22.2    31.3   -69.3    75.2
  16        0.1            10        3.6    1.8    59.5    59.4   -22.1    31.4   -67.4    75.0
  16        0.1            30        3.5    2.0    59.9    59.8   -22.8    31.8   -67.6    75.2
  16        0.2            25        4.1    2.0    60.1    59.9   -21.8    32.5   -66.3    76.2

TABLE 9. Wind direction estimation performance using 11 observation sites (9586 data pairs). Boldface indicates best performance, italics worst (percentiles combined; see text). All statistics are in compass degrees.

Levels  Compr. factor  Iterations   Mean  Median   Rmse  Std dev   25th    75th    10th    90th
   8        0.1            20        2.2    1.4    56.0    56.0   -17.8    26.8   -61.0    62.7
  16        0.1            20        2.0    1.3    56.1    56.1   -19.4    25.9   -62.5    64.4
   8        0.5            20        2.2    1.4    56.0    56.0   -18.0    27.1   -61.5    62.4
  16        0.5            20        2.4    1.3    56.4    56.3   -17.9    27.2   -62.3    64.1
  16        0.1             1        2.5    0.5    56.0    56.0   -16.7    24.7   -60.8    65.0
  16        0.1            10       1.9     1.0    55.7    55.7   -18.2    25.4   -61.3    63.5
  16        0.1            30       1.8     1.3    56.4    56.4   -20.0    26.3   -63.5    64.7
  16        0.2            25       2.0     1.4    56.5    56.5   -18.8    27.0   -62.8    64.7

are tabulated in the next section for all the model configurations. A separate table is given for speed and direction differences (observed minus modeled) and for each of the three different input datasets. Note that direction differences must be between -180° and 180°. The counts for direction differences are smaller than those for speeds because it was not possible to compare directions when the observed wind was reported as calm. Although winds were calculated at each grid point, only the winds at the grid points nearest the observation sites for which data were available (including those used as inputs) actually were tabulated for analysis. The performance measures (all based on the difference pairs) are the following.
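The wrapping of direction differences into the (-180°, 180°] interval can be sketched as follows (a convention of ours; the paper does not give a formula):

```python
def direction_difference(obs_deg, model_deg):
    """Observed-minus-modeled wind direction wrapped into (-180, 180]."""
    d = (obs_deg - model_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

So a 350° observation against a 10° model value yields -20°, not 340°.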

1) The mean of the differences between observed and modeled values (bias); this statistic is a measure of the systematic error of the method; it will be positive when the model has a tendency to underestimate the observed values and negative for overestimation.

2) The median; this statistic is the value that divides the dataset in half: half the differences are smaller and half are larger than the median. It provides another measure of systematic errors in the method and will be positive when the method underestimates observations more frequently than it overestimates them.

3) The root-mean-square error (rmse) is the square root of the mean of the squares of the differences and provides a measure of the likely error.

4) The standard deviation of the observed-minus-modeled values very closely approximates the rmse in the examples given below because the bias is usually very near zero.

5) The 25th and 75th percentiles (lower and upper quartiles) are analogous to the median. They define the values below which and above which 25% of the differences between observed and modeled values are found; half the estimation errors lie between the two values.

6) The 10th and 90th percentiles (lower and upper deciles) are the values below and above which 10% of the differences occur. They define a range about the median that includes 80% of all the estimation errors.

The above statistical measures have been applied using commercial statistical software (Velleman 1997). All of the measures except the standard deviation and rmse can be either positive or negative because they relate to simple differences between observed and calculated values. They will be positive when observations have been underestimated and negative when observations are overestimated.
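The six measures can also be computed from the difference pairs with the Python standard library alone; a sketch for illustration (the authors used commercial software, not this code):

```python
import statistics

def performance_measures(diffs):
    """Evaluation measures for a list of observed-minus-modeled differences."""
    n = len(diffs)
    quartiles = statistics.quantiles(diffs, n=4, method="inclusive")
    deciles = statistics.quantiles(diffs, n=10, method="inclusive")
    return {
        "mean": statistics.fmean(diffs),              # bias
        "median": statistics.median(diffs),
        "rmse": (sum(d * d for d in diffs) / n) ** 0.5,
        "std": statistics.pstdev(diffs),              # equals rmse when bias is zero
        "p25": quartiles[0], "p75": quartiles[2],
        "p10": deciles[0], "p90": deciles[8],
    }
```

For a symmetric sample the rmse and standard deviation coincide exactly, which is the near-equality noted in item 4 above.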

Radiosonde observations were used to define wind speed and direction at 4000 ft (approximately 1200 m) MSL and to define the height of the base of any inversion that was found below that height. The strongest measured inversion lapse rate also was recorded for the layer. These values were used to stratify the results to see whether model performance was appreciably better under some conditions than under others. Inversion height and strength were chosen for this purpose because the WOCSS methodology is supposed to account for inversion effects on the response of the flow to the terrain.


FIG. 3. Variability of mean (top) and rms (bottom) wind speed differences by meteorological category (11 stations, 16 flow surfaces, 20 iterations, and a compression factor of 0.1). Horizontal lines in each box indicate, from the bottom, the 25th, 50th, and 75th percentiles, and the actual differences are indicated by the plotted marks.

We also felt that the analysis scheme might perform better for some synoptic-scale flows than for others. To test these ideas, the data were stratified according to the height and intensity of the inversion and the large-scale wind speed and direction, as represented by the wind at 4000 ft. Table 2 shows the different categories into which the results were stratified for purposes of analysis.

The number of categories used to stratify the data had to be limited to provide enough cases for analysis in each category. The three wind direction categories (east, northwest, and southwest) represent offshore and onshore flow, with onshore flow subdivided into northwest and southwest to test for effects caused by the orientation of some of the terrain features. Two categories each were used for wind speed, inversion base height, and inversion intensity. The limits were chosen to provide reasonable numbers of cases in each category. The numbers of cases in each category are shown in the last columns of Table 2 for the different input datasets. Performance measures were calculated for each site for all the categories listed in Table 2 except for categories for which there were fewer than 10 cases.

Table 3 shows the joint occurrences of the different meteorological categories in the 4-station dataset. It can be seen that the different categories are not entirely independent. For example, the cases with either no inversion or high, weak inversions (types 10 and 13) are about five times more frequent in the afternoon (0300 UTC) than in the morning (1500 UTC). Therefore, types 10 and 13 will tend to include many cases in common with type 2. Similarly, low, strong inversions (type 12) occur almost four times as often at 1500 UTC as at 0300 UTC, so there will be many cases common to categories 3 and 12.

The WOCSS objective analysis scheme is very efficient and does not require a large amount of computer time, so it is possible to use the twice-daily observations for the entire year of 1996 and to run the model repeatedly, changing the various parameters discussed above. Obviously, the configurations using more iterations or more vertical levels take longer to run than the others. Typically, the model can be run overnight on a Power Macintosh 6100/66 PC for the 1243 different hours represented by the 4-, 7-, and 11-station input datasets.

    5. Results

Tables 4, 5, and 6 show the performance of the model in estimating wind speed when it uses input data from 4, 7, and 11 stations, respectively. The most obvious thing in the tables is that the variations in the model


FIG. 4. Same as Fig. 3 but for wind direction.

configuration do not produce very large differences in the overall performance. The best performances for the various measures are shown in bold type, while the worst are in italics. The decile and quartile measures have been combined, so that the sum of the two ranges (75th percentile minus 25th percentile, plus 90th percentile minus 10th percentile) is used as the performance measure. When these ranges are small, there are more model estimates close to the observed value. In a similar way, because of the near-zero means, the close agreement between the rmse and the standard deviation is apparent.

    The percentile measures show that there is a pronounced asymmetry in the estimation errors. Underestimates of speed are more common than overestimates and tend to be of greater magnitude. In essence, the results suggest that the model tends to miss extreme values, which, for wind speed, must be maxima because wind speed cannot be less than zero.

    The means and medians nearest zero represent the least-biased estimates of wind speed. Except for the 4-station means, the values of these two measures are all within the reporting precision of the observations (generally about 0.5 m s⁻¹). There is virtually no difference between the best and worst speed estimates by any of the measures when the same number of input observations are used. However, as expected, the performance does improve noticeably when more input data are provided.

    Tables 7, 8, and 9 provide summaries of the performance measures for wind direction using 4, 7, or 11 stations as input. As is true of the speeds, increasing the number of stations used as input results in substantial improvement in the model's ability to estimate wind direction. Otherwise, there is little change in model performance that results from changing the compression factor or number of flow surfaces used. For the smaller input datasets, increasing the number of iterations has a tendency to improve performance slightly until about 10 or 20 iterations are reached; above those numbers there is a slight degradation. When interpreting the results, it should be remembered that many of the directions are reported to either the nearest 10° or 22.5° segment. Thus, the mean differences (bias) shown in the tables are generally within the accuracy of the original data. Most of the rmse are about 60° or less. Table 9 shows that half the estimates are within about one 16-point direction segment of the observed value.
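Because direction is circular, observed-minus-calculated differences must be wrapped to a half-circle before bias and rmse are computed; otherwise a 350° observation against a 10° estimate would count as a 340° error instead of 20°. A minimal sketch of that wrapping (the function name is ours; the paper does not give its computation in code form):

```python
import numpy as np

def direction_error(obs_deg, model_deg):
    """Signed wind-direction difference (model minus observed),
    wrapped to the interval [-180, 180) degrees, so that errors
    across the 360/0 boundary are counted correctly."""
    d = np.asarray(model_deg) - np.asarray(obs_deg)
    return (d + 180.0) % 360.0 - 180.0

obs = np.array([350.0, 10.0, 180.0])
mod = np.array([10.0, 350.0, 200.0])

err = direction_error(obs, mod)          # -> [20., -20., 20.]
bias_dir = err.mean()                    # directional bias
rmse_dir = np.sqrt((err ** 2).mean())    # directional rmse (20.0 here)
```

A 180° wrapped error is the worst possible case, which is relevant to the flow-split failures discussed in the conclusions.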

    Figures 3 and 4 summarize the mean errors (bias) and rmse of wind speed and direction, respectively, for the meteorological categories defined in Table 2. The small marks show the values for the 39 stations used for evaluating the model. Some values are the same, so an



    FIG. 5. Overall distribution of wind model performance in the San Francisco Bay Area (11 stations, 16 flow surfaces, 20 iterations, and a compression factor of 0.1).

    individual mark may represent more than one station. The results shown in these figures are from model runs with 11 stations as input, 16 flow surfaces, a compression factor of 0.1, and 20 iterations. For some meteorological categories, not all stations had enough cases to calculate the statistics. The tops and bottoms of the boxes in the figures mark the upper and lower quartiles for the station values, and the horizontal line within the box shows the median. The medians of the mean wind direction or speed differences (biases) do not vary much by meteorological category, as can be seen in the two figures. The same is true for the speed rmse values. In all cases, at least half the stations had a speed rmse of less than 2 m s⁻¹, and 75% of less than about 2.5 m s⁻¹.
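The box-plot statistics just described amount to simple percentiles of the per-station values within each meteorological category. A sketch of that computation, using invented category names and rmse values purely for illustration (the actual values are in the paper's figures):

```python
import numpy as np

# Hypothetical per-station directional rmse values (degrees),
# grouped by meteorological category. These numbers are made up
# for illustration only.
station_rmse = {
    "sw_weak_aloft": [38.0, 45.0, 52.0, 61.0, 70.0],
    "strong_low_inversion": [55.0, 68.0, 72.0, 80.0, 95.0],
}

# For each category: lower quartile, median, and upper quartile
# across stations, i.e., the box bottom, center line, and box top.
box_stats = {
    cat: dict(zip(("q25", "median", "q75"),
                  np.percentile(vals, [25, 50, 75])))
    for cat, vals in station_rmse.items()
}
```

Categories whose boxes sit higher (larger median and quartiles) are those for which the model defines the wind field less well.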

    There were appreciable differences in wind direction rmse from one meteorological category to another. The median of this parameter varied from about 40° for the morning cases to about 70° for the afternoon. When upper-level winds were from the southwest quadrant at less than 5 m s⁻¹, the median directional rmse was about 50°; 75% of the stations had directional rmse less than 70°. By contrast, these values rose to about 70° and 85° when there was a strong inversion [≥3°C (100 m)⁻¹] with a base below 200 m.

    The results shown in Figs. 3 and 4 represent the distribution of the four different statistics for the 39 stations used to evaluate the model. The statistics shown are the mean difference (bias) between observed and calculated speeds and directions and the rmse values for speed and direction. The same results also can be mapped to show the geographical variations of model performance, as was done in Figs. 5, 6, and 7. Figure 5 represents the unstratified data. Figures 6 and 7 show the results for the two meteorological categories for which the model was best and least able to define the wind direction. Figure 6 is for the cases when winds were from the southwest quadrant at less than 5 m s⁻¹. Figure 7 shows the worst category, when there was a strong inversion [≥3°C (100 m)⁻¹] with a base below 200 m.



    FIG. 6. Distribution of wind model performance in the San Francisco Bay Area with weak (<5 m s⁻¹) southwest quadrant winds at 4000 ft (11 stations, 16 flow surfaces, 20 iterations, and a compression factor of 0.1).

    6. Conclusions

    The results presented here have shown several things. One of the most important is the demonstration that it is possible to perform a comprehensive model evaluation using routinely collected data, although it does require the consolidation of data from more than one source. The results also demonstrate that it does not require massive, expensive computing facilities to obtain large numbers of model outputs that can be compared with the observations.

    It is important, however, to note that the results presented here are for surface estimates and observations only. Model evaluation through the depth of the boundary layer awaits more comprehensive measurements aloft. The more widespread deployment of wind profilers and other remote sensing instrumentation may solve this problem in the future. There are other interesting research topics that also might be pursued in the future, such as comparison of the surface and upper-level outputs from simple objective analysis tools like WOCSS with those from more comprehensive mesoscale dynamic analysis models (e.g., Parsons and Dudhia 1997). Another potentially useful topic is the degree to which simple objective analysis can be used to provide long-term local detail, using coarse-grid information such as that being generated by the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis project (Kalnay et al. 1996).

    More specifically, the results have shown that the WOCSS approach to diagnostic wind modeling is efficient and robust. It may be that it is too robust, judging by the lack of sensitivity to changes in its configuration. The fact that the model does not perform better for the strong inversions and low wind speeds for which it primarily was developed was disappointing. Those conditions are, however, by far the most difficult for modeling winds, and the fact that the performance was not



    FIG. 7. Distribution of wind model performance in the San Francisco Bay Area when there is a strong inversion [≥3°C (100 m)⁻¹] with a base below 200 m (11 stations, 16 flow surfaces, 20 iterations, and a compression factor of 0.1).

    degraded badly under these most difficult conditions is encouraging.

    The usual assumption that diagnostic models perform better with more input data has been confirmed, and the extent was quantified for the model used. Overall, the model has little bias in wind direction, generally 15° or less. The rmse in the calculations is less than 45° over most of the modeled region. The distribution of the

    directional rmse shown in Fig. 5c shows that much of the poor performance is found around La Honda (see Fig. 1), in the mountains in the southeast part of the domain, and in a wide swath to the east of San Francisco. La Honda and the other stations in mountainous areas may well not be very representative of the flow. The Altamont Pass area east of Livermore is of particular interest because of the many wind-energy sites in the region. However, no data currently are archived from this region, so we could not accurately evaluate how well the model performs there.

    We hope to investigate some of these problems in the future.

    One of the high-rmse regions in Fig. 5c is where wind often passes through gaps in the coastal hills, moves across the bay, and then splits into two flows in opposite directions. One part of the flow proceeds down the bay toward the southeast. The other arm of the flow tends to move north and then turn eastward through the low-level San Joaquin delta region. The model often misses the location of the split in the flow. When this failure happens, the difference between observed and calculated directions may be as much as 180°. The precise specification of this feature will require more input data from the region where it occurs.

    Acknowledgments. The authors want to acknowledge the essential contributions of Allen Becker of San Jose State University, who helped develop much of the software used to collect the data that were used. The UCAR-sponsored Cooperative Meteorological Education and Training Outreach program supported the effort to make the diagnostic model operate in real time. An FAA grant to the MIT Lincoln Laboratory provided the San Jose State University Foundation (Subcontract 21-1505-6843) with the equipment used to collect, parse, and distribute the data input files. The archiving has been done by Ralph Cheng, assisted by Jonathan Feinstein and John Cate, using equipment at the USGS Menlo Park, California, facilities. Walt Strach of the NWS Center Weather Service Unit in Fremont, California, has been very cooperative in making the parsed data available. The development of the WOCSS model has been supported over the years by the Army Research Office and the Army Atmospheric Sciences Laboratory. We also are very grateful for all the data that have been collected and supplied by the agencies listed in Table 1.

    REFERENCES

    Baskett, R. L., R. L. Lee, W. A. Nuss, R. D. Bornstein, D. W. Reynolds, T. Umeda, and F. L. Ludwig, 1998: The Bay Area Mesonet Initiative (BAMI): A cooperative effort to develop and operate a real-time mesoscale network in the greater San Francisco and Monterey Bay Areas. Preprints, Second Conf. on Coastal Atmospheric and Oceanic Prediction and Processes, Phoenix, AZ, Amer. Meteor. Soc., J30–J35.

    Bhumralkar, C. M., R. L. Mancuso, F. L. Ludwig, and D. S. Renne, 1980: A practical and economic method for estimating wind characteristics at potential wind energy conversion sites. Sol. Energy, 25, 55–65.

    Bridger, A. F. C., A. J. Becker, F. L. Ludwig, and R. M. Endlich, 1994: Evaluation of the WOCSS wind analysis scheme for the San Francisco Bay Area. J. Appl. Meteor., 33, 1210–1218.

    Endlich, R. M., 1967: An iterative method for altering the kinematic properties of wind fields. J. Appl. Meteor., 6, 837–844.
    ——, 1984: Wind energy estimates by use of a diagnostic model. Bound.-Layer Meteor., 30, 375–386.

    Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–471.

    Ludwig, F. L., J. M. Livingston, and R. M. Endlich, 1991: Use of mass conservation and critical dividing streamline concepts for efficient objective analysis of winds in complex terrain. J. Appl. Meteor., 30, 1490–1499.
    ——, R. T. Cheng, J. Feinstein, D. M. Sinton, and A. Becker, 1997: An on-line diagnostic wind model applied to the San Francisco Bay region. Preprints, 13th Int. Conf. on Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Long Beach, CA, Amer. Meteor. Soc., 344–347.

    McNider, R. T., K. E. Johnson, and R. W. Arritt, 1984: Transferability of critical dividing streamline models to larger scale terrain. Preprints, Fourth Joint Conf. on Applications of Air Pollution Meteorology, Portland, OR, Amer. Meteor. Soc., J25–J27.

    Parsons, D. B., and J. Dudhia, 1997: Observing system simulation experiments and objective analysis tests in support of the goals of the Atmospheric Radiation Measurement Program. Mon. Wea. Rev., 125, 2353–2381.

    Sheppard, P. A., 1956: Airflow over mountains. Quart. J. Roy. Meteor. Soc., 82, 528–529.
    Sherman, C. A., 1978: A mass-consistent model for wind fields over complex terrain. J. Appl. Meteor., 17, 312–319.

    Strach, W., F. L. Ludwig, D. Sinton, and A. Becker, 1997: Applications of a diagnostic wind model to stratus forecasting for aircraft operations in the San Francisco Bay region. Preprints, Seventh Conf. on Aviation Weather Systems, Long Beach, CA, Amer. Meteor. Soc., J29–J32.

    Thykier-Nielsen, S., T. Mikkelsen, R. Kamada, and S. A. Drake, 1990: Wind flow model study for complex terrain. Preprints, Ninth Symp. on Turbulence and Diffusion, Copenhagen, Denmark, Amer. Meteor. Soc., 421–424.

    Tucker, D. F., 1997: Surface mesonets of the western United States. Bull. Amer. Meteor. Soc., 78, 1485–1495.

    Velleman, P. F., 1997: Data Desk, Version 6.0, Handbook 2. Data Description, Inc., 346 pp.