Solar radiation is the ultimate source of energy flowing through the atmosphere; it fuels all atmospheric motions. The visible-wavelength range of solar radiation represents a significant contribution to the earth's energy budget, and visible light is a vital indicator for the composition and thermodynamic processes of the atmosphere from the smallest weather scales to the largest climate scales. The accurate and fast description of light propagation in the atmosphere and its lower-boundary environment is therefore of critical importance for the simulation and prediction of weather and climate.
Simulated Weather Imagery (SWIm) is a new, fast, and physically based visible-wavelength three-dimensional radiative transfer model. Given the location and intensity of the sources of light (natural or artificial) and the composition (e.g., clear or turbid air with aerosols, liquid or ice clouds, precipitating rain, snow, and ice hydrometeors) of the atmosphere, it describes the propagation of light and produces visually and physically realistic hemispheric or 360° panoramic imagery from ground-, air-, or space-based vantage points.
Applications of SWIm include the visualization of atmospheric and land surface conditions simulated or forecast by numerical weather or climate analysis and prediction systems for either scientific or lay audiences. Simulated SWIm imagery can also be generated for and compared with observed camera images to (i) assess the fidelity and (ii) improve the performance of numerical atmospheric and land surface models. Through the use of the latter in a data assimilation scheme, it can also (iii) improve the estimate of the state of atmospheric and land surface initial conditions for situational awareness and numerical weather prediction forecast initialization purposes.
Numerical weather prediction (NWP) modeling is a maturing technology for the monitoring and prediction of weather and climate conditions on a wide continuum of timescales (e.g., Kalnay, 2003). In NWP models, the large-scale variability of the atmosphere is represented via carefully chosen and geographically and systematically laid-out prognostic variables such as vertically stacked latitude–longitude grids of surface pressure, temperature, wind, humidity, suspended (clouds) and falling (precipitating) hydrometeors, and aerosol. Using differential equations, NWP models capture temporal relationships among the atmospheric variables, allowing for the projection of the state of the atmosphere into the future. Short-range NWP forecasts (called “first guesses”) can then be combined with the latest observations of atmospheric conditions to estimate the instantaneous weather conditions at any point in time (called the “analyzed state”, “analysis”, or “forecast initial condition”) using data assimilation (DA) methods (e.g., Kalnay, 2003).
The initialization of forecasts (and thus DA) plays a critical role in NWP as the more complete the information the analysis state has about the atmosphere, the longer subsequent forecasts will retain skill (e.g., Toth and Buizza, 2019), hence the desire for DA to exploit as many observations – and from as diverse a set of instruments – as possible. Some observations are in the form of model variables, in which case, after temporal and/or spatial interpolations, they can be directly combined with a model first guess (i.e., “direct” measurements or observations). Many other instruments, however, observe quantities that are different but related to the model variables (i.e., “indirect” measurements).
Indirect observations in the form of visible-wavelength light intensity such as those from high-resolution (down to 30 s time frequency and 500 m pixels) imagers aboard a family of geostationary satellites (e.g., Himawari Advanced Himawari Imager, AHI; GOES-R Advanced Baseline Imager, ABI; Schmit et al., 2017) and from airborne or ground-based cameras offer unique opportunities. First, unlike most other observations, light intensity is readily convertible to color imagery, offering a visual representation of the environment to both specialized (researchers or forecasters) and lay (the general public) users. Note that visual perception is by far humans' most informative sense. Secondly, high-resolution color imagery provides a unique window into fine-scale land surface, aerosol, and cloud processes that are critical for both the monitoring and nowcasting of convective and other severe weather events as well as for the assessment and refinement of modeled energy balance relationships that are crucial for climate forecasting. Information on related processes derivable from other currently available types of observations is limited in spatiotemporal and other aspects compared to color imagery.
Physically, color imagery is a visual representation of the intensity of different-wavelength light (i.e., spectral radiance) reaching a selected point (i.e., location of a photographic or imaging instrument) from an array of directions, determined by the design of the instrument, at a given time. For computational efficiency, radiative processes are vastly simplified in NWP models and typically resolve (sun to atmospheric or land surface grid point) only how solar insolation affects the temperature conditions in the atmosphere and on the land surface in a one-dimensional manner.
Color imagery clearly reflects (no pun intended) the geographical distribution and physical characteristics of cloud, aerosol, and land surface conditions in the natural environment. Some of the quantities used in NWP models to represent such conditions include the amount of moisture, various forms of cloud-forming and falling hydrometeors, the amount and type of aerosols, the amount and type of vegetation and snow cover on the ground, and water surface wave characteristics. Light processes recorded in color imagery constitute indirect measurements of such natural processes that must be quantitatively connected with NWP model prognostic variables before their possible use in the initialization of NWP models.
In the assimilation of direct observations, the value of model variables in the first guess is adjusted toward that of observations (based on the expected level of error in each; e.g., Kalnay, 2003). In the first step of assimilating indirect observations, simple models (called “forward” models or operators) are used to create “synthetic” observations based on model variables. Synthetic observations simulate what measurements we would get had instruments been placed in a world consistent with the abstract conditions of an NWP first-guess forecast. The model-based synthetic observations can then be compared with real-world measurements of the same (nonmodel) quantities. Utilizing an adjoint or ensemble-based inverse of the forward operator or another minimization procedure, the first-guess forecast variables are then adjusted to minimize the difference between the simulated and real observations. In case of visible light measurements, observations can be considered to be in the form of color (or multispectral visible) imagery.
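As a generic sketch (not specific to any particular DA system mentioned here), this adjustment can be written as the minimization of a variational cost function that penalizes departures from both the first guess $\mathbf{x}_b$ and the observations $\mathbf{y}$, with the forward operator $H$ mapping model variables into observation space:

$$J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big),$$

where $\mathbf{B}$ and $\mathbf{R}$ are the background and observation error covariances, respectively.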
Beyond their expanding use in DA applications, the simulation of color imagery from model variables via forward operators has another important purpose: the visualization of 4D NWP analysis and forecast fields. Visualization renders the complex NWP data laid out in three dimensions in space (and one across model variables) readily perceptible by both expert and lay audiences, facilitating a unique validation and communication of analysis and forecast information.
This study is intended to introduce the Simulated Weather Imagery model (SWIm) and describe what has been done so far as well as to suggest a roadmap for the future. Section 2 is a brief review of the general properties and limitations of currently available multispectral-radiance and color-imagery forward operators. The main contribution of this paper is the introduction of SWIm, a recently developed fast color-imagery forward (or color visible-radiation transfer) model (Sect. 3). Section 4 explores two application areas for SWIm: the visualization and validation of NWP analysis and forecast fields and the assimilation of color-imagery observations into NWP analysis fields. Closing remarks and some discussion are offered in Sect. 5.
Light observations used in multispectral visible imagery are affected by three main factors: (1) the light source (its location and intensity across the visible spectrum), (2) the medium through which the light travels (the composition and density of its constituents in 3D space), and (3) the location where the light is observed or perceived (Fig. 1). Conceptually, the modeling of how light from a given source propagates through a medium and affects an instrument or receptor involves a realistic (a) relative placement of the light source, medium, and receptor with respect to each other; (b) representation of light emission from its source; (c) description of the medium (from an NWP analysis of the atmosphere and its surroundings); (d) simulation of how light is modified as it travels through the medium via absorptive and diffusive processes; and (e) simulation of the response of the instrument or human observer to the natural stimuli. Full, end-to-end color-imagery forward modeling involves the specification of (a) and (b), an estimation of (c), the simulation of processes described in (d; “ray tracing”), and the consideration of the impact of radiation (e).
Overview of functionality in a sampling of radiative transfer packages.
General ray tracing procedure showing forward light rays (yellow) coming from the light source and a second set of light rays (pink) traced backward from the observer, with the forward and backward optical thicknesses evaluated along each path.
Light propagation has been extensively studied from both experimental and theoretical perspectives. The most scientifically rigorous treatment follows individual photons and analyzes stochastically the expected or net effect of scattering and absorption. Named after the stochastic concept involved, this line of inquiry and the related methodology are referred to as the “Monte Carlo” approach. As noted in Table 1, a Monte Carlo approach (e.g., Mayer, 2009) works in a wide variety of situations with a wide array of 3D atmospheric fields, arbitrary vantage points, and day and night applications. A Monte Carlo package is also the only one listed that the authors have seen produce images with visually realistic colors as seen from the ground. Table 1 also lists the characteristics of some other widely used radiative transfer models. Whereas the Monte Carlo model is physically more rigorous, it is computationally much more intensive than some of the other methods. The computational efficiency of the other methods comes at the cost of significant approximations or other limitations. For example, the rapid radiative transfer model (RRTM) provides irradiance at different grid levels and is used as a radiation parameterization package in NWP models. As is typical for such packages, the RRTM operates in single columns; hence it cannot produce the 3D directional imagery that the Monte Carlo approach can. The community radiative transfer model (CRTM; Kleespies et al., 2004) is used for both visualization and as a radiative forward operator in variational and related DA systems. The spherical harmonics discrete ordinate method (SHDOM; Evans, 1998; Doicu et al., 2013) is another sophisticated radiative transfer model often used in fine-scale research studies. The SHDOM can produce imagery with good physical accuracy.
Table 1 also lists the characteristics of SWIm, the recently developed method that the next section describes in some detail. SWIm was designed for the rapid production of color imagery under a wide range of conditions. To satisfy these requirements, approximations to the more rigorous treatment of some physical processes had to be made. The level of approximations was carefully chosen to improve computational efficiency without unnecessarily sacrificing accuracy. By considering human color-vision perception, SWIm produces images that are visually realistic. This feature is used in other visualizations (e.g., Klinger et al., 2017) that use the MYSTIC radiative transfer package (Mayer, 2009), though to our knowledge it is not always considered for image display in the operational meteorology community. The color calculation allows the simulated images to be directly compared with photographic color images since it can accurately convert spectral-radiance values into appropriate displayed RGB values on a computer monitor as described in Sect. 3.8. As discussed in the rest of this study, with these features, SWIm occupies a niche for the versatile visualization and validation of NWP analyses and forecasts as well as for the assimilation of color-imagery observations aimed at improved NWP initialization and nowcasting applications.
List of ray tracing steps used in SWIm. Steps 1a and 1b are illustrated in Fig. 1.
SWIm considers the sun and the moon (if it is sufficiently bright) as nearly point daytime and nighttime light sources. Information on the medium through which light travels is obtained from 3D NWP analysis and forecast hydrometeor and aerosol fields. To simulate the propagation of light, SWIm invokes an efficient simplified ray tracing approach that can be benchmarked against results from more sophisticated radiative transfer packages, including the Monte Carlo method. There are two main sets of rays that are traced for scattering and absorption calculations. The first is from the sun (forward direction; Step 1a in Table 2) and the second is from the observer (backward direction; Step 1b), making SWIm a forward–backward ray tracing procedure (see Fig. 1). These traces are calculated over the model grid for the gas, aerosol, and hydrometeor components. Since the actual atmosphere extends above and, if it is a limited-area model (LAM), also laterally outside the model grid, a separate, faster ray tracing step is done that considers just the gas and horizontally uniform aerosol components beyond the limited model domain. An algorithmic procedure then combines these results to arrive at the final radiance values and corresponding image display. The above steps are summarized in Table 2.
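As a minimal illustration of the optical-depth bookkeeping behind Steps 1a and 1b (a sketch only; the grid sampling, step length, and function names are assumptions rather than SWIm's implementation):

```python
import numpy as np

def optical_depth_along_ray(extinction, origin, direction, ds, n_steps):
    """Accumulate optical depth by marching through a 3D extinction field.

    extinction : 3D array of extinction coefficients [1/m] on a regular grid
    origin     : starting point in grid-index coordinates (i, j, k)
    direction  : unit vector of the ray in the same index coordinates
    ds         : physical step length [m], taken equal to the grid spacing
    """
    tau = 0.0
    pos = np.asarray(origin, dtype=float)
    step = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        i, j, k = np.round(pos).astype(int)
        if not (0 <= i < extinction.shape[0] and
                0 <= j < extinction.shape[1] and
                0 <= k < extinction.shape[2]):
            break                          # ray has left the model domain
        tau += extinction[i, j, k] * ds    # nearest-neighbor sampling
        pos += step
    return tau

# A forward trace (grid point toward the light source) and a backward trace
# (observer toward the grid point) would each call this routine; their sum
# attenuates the single-scattering contribution of that grid point.
```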
For gas and aerosols, we evaluate the optical depth along each ray path from the corresponding extinction coefficients.
The top-of-atmosphere (TOA) solar irradiance serves as the incident light source for these calculations.
Time series of GHI values integrated from SWIm radiance images (red lines, vertical axis on left) compared with concurrent pyranometer observations in watts per square meter at the National Renewable Energy Laboratory (NREL) (green lines). The comparison spans a 4 h period on the morning of 12 August 2019. Simulated minus pyranometer GHI values are plotted as blue circles (vertical axis on right). Sky conditions were free of significant clouds, with aerosol optical depth < 0.1.
Once we calculate SWIm spectral-radiance values at each pixel, it is possible to estimate the global horizontal irradiance (GHI) by first integrating the spectral radiance, weighted by the cosine of each pixel's zenith angle, over the sky hemisphere and then integrating over wavelength.
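In standard terms, this corresponds to a cosine-weighted solid-angle integration followed by a spectral integration (a sketch of the usual definition rather than SWIm's exact discretization):

$$\mathrm{GHI} = \int_{\lambda}\int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} L(\lambda,\theta,\phi)\,\cos\theta\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi\,\mathrm{d}\lambda,$$

where $\theta$ and $\phi$ are the zenith and azimuth angles of each pixel's viewing direction and $L$ is the spectral radiance.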
As a worst-case check, we consider a pure Rayleigh blue sky and evaluate the normalized spectrum integrated from 0.3 to 3.0 µm to bound the error introduced by this spectral assumption.
With its realistic 3D ray tracing, SWIm is able to simulate a number of daytime, twilight, and nighttime atmospheric light effects, including consideration of a spherical atmosphere. This involves various light sources including moonlight, city lights, airglow, and astronomical objects. These will be demonstrated in a separate paper.
To cover the full extent of atmosphere beyond the NWP model domain, a
“clear-sky” ray tracing (Step 2) is conducted on a coarser angular grid
compared with Step 1. The primary purpose of Step 2 is to provide a more
direct account of the radiance produced by Rayleigh single scattering. A
second purpose is to model the effect of aerosols that may extend beyond the
top of the model grid, specified via a 1D stratospheric variable. The
accuracy of radiative processes associated both with stratospheric aerosols
and twilight benefits from the vertical extent considered in this step all
the way up to about 100 km. To calculate the solar-relative spectral radiance, the ray tracing algorithm integrates the attenuated single-scattering contributions along each line of sight from the observer.
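In generic notation, such an integral takes the familiar single-scattering form (shown as an illustration only; SWIm's exact equation and normalization may differ):

$$L(\lambda) = \int_{0}^{s_{\max}} \frac{\omega_0\,\sigma_{\mathrm{ext}}(s)\,P(\Theta)}{4\pi}\, E_0(\lambda)\, e^{-\tau_{\mathrm{sun}}(s)}\, e^{-\tau_{\mathrm{obs}}(s)}\,\mathrm{d}s,$$

where $\sigma_{\mathrm{ext}}$ is the extinction coefficient, $\omega_0$ the single-scattering albedo, $P(\Theta)$ the phase function at scattering angle $\Theta$, $E_0$ the incident solar irradiance, and $\tau_{\mathrm{sun}}$ and $\tau_{\mathrm{obs}}$ the optical depths toward the sun and back to the observer.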
As the light rays are traced through the model grid (yellow rays in Fig. 1,
Step 1a in Table 2), their attenuation and forward scattering are determined
by considering the optical thickness of intervening clouds and aerosols
along their paths. The optical thickness between each 3D grid point and the light source is accumulated by summing the extinction of each grid volume traversed along the path.
The effective radius is specified based on hydrometeor type and (for cloud
liquid and ice) CWC. For cloud liquid and cloud ice, larger values of
CWC translate to larger effective radii.
The scalar irradiance at the grid point can now be written (Eq. 8), here assuming the surface albedo to be 0.
Single-scattering phase functions used for cloud liquid, cloud ice, rain, and snow.
The single-scattering phase function has a sharp peak near the sun (i.e.,
forward scatter) that generally becomes stronger in magnitude for larger
hydrometeors. Cloud ice and snow also have sharper forward peaks than
liquid, particularly for pristine ice. A linear combination of
Henyey–Greenstein (HG) functions (Henyey and Greenstein, 1941) is employed
to specify the angularly dependent scattering behavior (phase function) for
each hydrometeor type, producing curves shown in Fig. 3. Linear
combinations employing several of these functions are used as a simple way
to reasonably fit the angular dependence produced by Mie scattering. If more
detailed size distributions (and particle shapes for ice) are available, a
more exact representation of Mie scattering can be considered through the
use of Legendre polynomial coefficients and a lookup table or through other
parameterizations (e.g., Key et al., 2002). Given the values of the asymmetry factor for each hydrometeor type, the weights and parameters of the combined HG functions are set accordingly.
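For reference, the HG phase function and its linear combination take the standard form (the weights $w_i$ and asymmetry parameters $g_i$ are fitted per hydrometeor type; the notation here is generic rather than SWIm's):

$$P_{\mathrm{HG}}(\theta; g) = \frac{1-g^{2}}{\left(1 + g^{2} - 2g\cos\theta\right)^{3/2}}, \qquad P(\theta) = \sum_{i} w_i\, P_{\mathrm{HG}}(\theta; g_i), \quad \sum_{i} w_i = 1.$$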
When the optical thickness along the forward or backward paths approaches or exceeds unity, contributions to the observed signal from multiple-scattering events become too significant to approximate via single scattering. A rigorous though time-consuming approach such as Monte Carlo would consider each scattering event explicitly. Instead, here we use a more efficient approximation that arrives at a single-scattering phase function that approximates the bulk effect of the multiple-scattering events. Several terms that interpolate between optically thin and thick clouds are used as input for this parameterization as described below.
Thick clouds seen from near ground level can be either directly or indirectly illuminated by the light source. As illustrated by the light rays in Fig. 1, direct illumination corresponds to a small optical thickness along the forward path between the cloud and the light source, while indirect illumination occurs when that path is optically thick and the cloud is lit mainly by diffused light. Intermediate values of the forward optical thickness are handled by interpolating between these two limits.
Our neighboring planet Venus offers an astronomical example of the radiative behavior of such a cloudy spherical surface. For the planet as a whole, Venus has a well-established phase function describing its brightness as a function of solar phase angle.
As a simple illustration for cases looking from above, we consider a homogeneous cloud of hydrometeors with a given optical thickness. The optically thin and optically thick limits are treated separately, and intermediate cases are handled by interpolating between them.
There are two general methods for working with aerosols in SWIm. The first uses a 1D specification of the aerosol field that runs somewhat faster than a 3D treatment. The second, newer approach considers the 3D aerosol distribution described in detail herein. Aerosols are specified by a chemistry model in the form of a 3D extinction coefficient field. Various optical properties are assigned based on the predominant type (species) of aerosols present in the model domain.
To determine the scattering phase function, clouds and aerosols are considered together, with aerosols simply treated as another species of hydrometeor. For a case of aerosols only, the phase function is specified directly from the aerosol optical properties described below.
Dust generally has a bimodal size distribution of relatively large particles. To account for both the coarse- and fine-mode aerosols and to fit the forward-scattering peak, a linear combination of a pair of double Henyey–Greenstein (DHG) functions (Eq. 11) can be used, substituting appropriate parameters for each mode.
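For illustration, a generic double Henyey–Greenstein form combines a forward-peaked term with a secondary lobe,

$$P_{\mathrm{DHG}}(\theta) = f\,P_{\mathrm{HG}}(\theta; g_1) + (1-f)\,P_{\mathrm{HG}}(\theta; g_2), \qquad 0 \le f \le 1,$$

and a pair of such functions, one per aerosol mode, can then be weighted to represent the bimodal distribution. The exact parameterization in Eq. (11) may differ from this generic form.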
The single-scattering albedo specifies the scattered (nonabsorbed) fraction of the aerosol extinction and may vary with wavelength.
Two cases showing the four fitted phase function parameters.
In its current configuration, aerosol optical properties for the entire domain are assumed to be characterized by a single set of parameters in SWIm, reflecting the behavior of a predominant type or mixture of aerosols. The first row in Table 3 was arrived at semiempirically for relatively dusty days in Boulder, Colorado, by setting values of the parameters and comparing the appearance of the solar aureole and overall pattern of sky radiance between simulated and camera images as well as visual observations.
The cameras being used are not radiometrically calibrated, though we can
approximately adjust the camera color and contrast on the basis of the
Rayleigh scattering radiance distribution far from the sun on relatively
clear days. We are thus limited to looking principally at relative
brightness changes in a semiempirical manner. The cameras do not use shadow bands and generally saturate from direct sunlight within a small angular region surrounding the sun.
These days feature a relatively condensed aureole around the sun indicative of a contribution by large dust particles to a bimodal aerosol size distribution. This type of distribution has often been observed in AERONET (Holben et al., 1998) retrievals. The single-scattering albedo is set with increased blue absorption as might be expected for dust containing a hematite component.
Simulated panoramic images with an aerosol optical depth of 0.1 using the two sets of phase function parameters listed in Table 3.
The second case of mixed dust and pollution was derived from AERONET observations over Saudi Arabia, calculating the phase function using Mie scattering theory (Appendix A) then applying a curve fitting procedure to yield the four phase function parameters described previously. In this case the single-scattering albedo is spectrally independent. Simulated images for these two sets of phase function parameters are shown in Fig. 4.
As with meteorological clouds, when the aerosol optical thickness along the forward or backward ray paths (Fig. 1) approaches or exceeds unity, the contributions from multiple scattering increase. In a manner similar to cloud multiple scattering, we utilize a more efficient approximation that determines a single-scattering phase function equivalent to the net effect of the multiple-scattering events.
Nonabsorbing aerosols seen from above can be treated in a similar manner to
cloud layers as described above (Eq. 9). We now extend this treatment to
address absorbing aerosols. SWIm was tested using 3D aerosol fields from two
chemistry models running at Colorado State University (CSU): the Regional
Atmospheric Modeling System (RAMS; Miller et al., 2019; Bukowski and van den Heever,
2020) and the Weather Research and Forecasting Model (WRF; Skamarock et al.,
2008). SWIm was also tested with two additional chemistry models: the High-Resolution Rapid Refresh (HRRR)-Smoke model (Fig. 5) and the Navy Global Environmental Model (NAVGEM; Fig. 6).
Simulated image of an HRRR-Smoke forecast with a smoke plume from the December 2017 California wildfires. The view is zoomed in from a space-based perspective point.
For partially absorbing aerosols such as smoke containing black carbon or dust, in a thin layer we can multiply Eq. (6) by the single-scattering albedo to account for the absorbed fraction of the extinction.
The clear-sky radiance from Step 2 is then combined with these aerosol contributions to obtain the total radiance along each line of sight.
NAVGEM view from space using aerosols only, rendered from a space-based perspective point.
When a backward-traced ray starting at the observer intersects the land surface, we consider the incident and reflected light upon the surface that contributes to the observed light intensity to be attenuated by the intervening gas, aerosol, and cloud elements. Terrain elevation data on the NWP model grid are used to help determine where light rays may intersect the terrain. The land spectral albedo is obtained at 500 m resolution using Blue Marble Next Generation imagery (BMNG; Stockli et al., 2005). The BMNG image RGB values are functionally related to spectral albedo for three Moderate Resolution Imaging Spectroradiometer (MODIS) visible-wavelength channels. A spectral interpolation is performed to translate the BMNG and MODIS albedos into the three reference wavelengths used in SWIm.
In situ panoramic view in the lower troposphere showing smoke
aerosols and hydrometeors. This is part of an animation simulating an
airplane landing at Denver International Airport. The panorama spans a full 360° in azimuth.
SWIm-generated image for a hypothetical clear-sky case with a specified aerosol optical depth.
For a higher-resolution display over the continental United States, an aerial photography dataset obtained from the United States Department of Agriculture (USDA) can also be used (Figs. 7, 8). The associated National Agriculture Imagery Program (NAIP) data are available at 70 cm resolution and are added to the visualization at subgrid scales with respect to the model Cartesian grid. This dataset is only roughly controlled for spectral albedo, though its very high spatial resolution can make it a worthwhile trade-off.
To obtain the reflected surface radiance in each of the three reference
wavelengths, we utilize clear-sky estimates of direct and diffuse incident
solar irradiance. For the direct irradiance component, spectral albedo is
converted to reflectance using the anisotropic reflectance factor (ARF) that
depends on the viewing geometry and land surface type. The reflectance is thus the product of the spectral albedo and the ARF for the given sun–view geometry.
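Schematically, for the direct component (generic notation assumed here for illustration),

$$R(\lambda,\theta_v,\theta_s,\Delta\phi) = a(\lambda)\,\mathrm{ARF}(\theta_v,\theta_s,\Delta\phi),$$

where $a(\lambda)$ is the spectral albedo, $\theta_v$ and $\theta_s$ are the viewing and solar zenith angles, and $\Delta\phi$ is the relative azimuth.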
Relatively simple analytical functions for ARF are used over land with maximum values in the backscattering direction. Modified values of surface albedo and ARF are used in the presence of snow or ice cover with maximum values in the forward-scattering direction. Similar to earlier work (Cox and Munk, 1954), a sun glint model with a fixed value of mean wave slope is used over water except that waves are given a random orientation without a preferred direction. Scattering from below the water surface is also considered. In the future, wave slope will be derived from NWP ocean wave and wind forecasts.
As explained earlier, spectral radiances are computed for three narrowband wavelengths using solar-relative intensity units to yield a scaled spectral reflectance. This allows some flexibility for outputting spectral radiances, spectral reflectance, or more visually realistic imagery that accounts for details in human color vision and computer monitor characteristics. To accomplish the latter it is necessary to estimate spectral radiance over the full visible spectrum using the partial information from the selected narrowband wavelengths we have so far. Having a full spectrum is important when computing an accurate human color vision response (Bell et al., 2006). The procedure is to first perform a polynomial interpolation and extrapolation of the three narrowband (solar-relative) reflectance values, then multiply this by the solar spectrum, yielding spectral radiance over the entire visible spectrum at each pixel location. The observed solar spectrum interpolated in 20 nm steps is used for purposes of subsequent numerical integration.
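A minimal sketch of this spectral reconstruction follows, assuming illustrative reference wavelengths, reflectances, and solar-spectrum values (none of these numbers are taken from SWIm):

```python
import numpy as np

# Three assumed narrowband reference wavelengths (nm) and their
# solar-relative reflectances at one pixel (illustrative values only).
ref_wl = np.array([450.0, 550.0, 650.0])
ref_refl = np.array([0.30, 0.22, 0.18])

# Quadratic fit through the three points, then evaluate on a 20 nm grid
# spanning the visible spectrum (extrapolating toward the band edges).
coeffs = np.polyfit(ref_wl, ref_refl, deg=2)
wl_grid = np.arange(380.0, 721.0, 20.0)
refl_spectrum = np.polyval(coeffs, wl_grid)

# Multiply by a tabulated solar spectral irradiance (placeholder values
# here; in practice an observed solar spectrum at 20 nm steps is used).
solar_spectrum = np.interp(wl_grid, [380.0, 550.0, 720.0], [1.1, 1.85, 1.5])
radiance_spectrum = refl_spectrum * solar_spectrum  # per-pixel spectral radiance
```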
Digital RGB color images are created by calculating the image count values
with three additional steps:
Convolve the spectral radiance (produced by the step described in the paragraph above) with the CIE tristimulus color-matching response functions to account for color perception under assumptions of normal human photopic vision. Each pixel of the image now specifies the perceived color as CIE tristimulus values.
Apply a linear transformation from the tristimulus values to the RGB primaries of a standard computer monitor.
Include a gamma (approximate power law) correction with a value of 2.2 to
match the nonlinear monitor brightness scaling. With this correction the
displayed image brightness will be directly proportional to the actual
brightness of a scene in nature, giving realistic contrast and avoiding
unrealistically saturated colors. With no correction, the contrast would be incorrect and the brightness distorted by a power-law factor.
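These three display steps can be sketched as follows, using the standard published CIE and sRGB constants (the matrix and a simple 1/2.2 encoding exponent are generic colorimetry values, not constants taken from SWIm):

```python
import numpy as np

def spectrum_to_srgb(wl_nm, radiance, cie_xbar, cie_ybar, cie_zbar):
    """Convert a spectral radiance curve to gamma-corrected sRGB values in [0, 1].

    wl_nm    : wavelengths of the spectral samples [nm]
    radiance : spectral radiance at those wavelengths (relative units)
    cie_*    : CIE 1931 color matching functions sampled at wl_nm
    """
    # 1. Convolve (integrate) the spectrum with the CIE tristimulus functions.
    X = np.trapz(radiance * cie_xbar, wl_nm)
    Y = np.trapz(radiance * cie_ybar, wl_nm)
    Z = np.trapz(radiance * cie_zbar, wl_nm)

    # 2. Linear transform from XYZ to linear sRGB primaries (D65 white point).
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb_linear = m @ np.array([X, Y, Z])

    # 3. Gamma (power-law) correction with an exponent of 1/2.2 to match a
    #    display gamma of 2.2, then clip to the displayable range.
    rgb = np.clip(rgb_linear, 0.0, None) ** (1.0 / 2.2)
    return np.clip(rgb, 0.0, 1.0)
```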
Based on an extensive subjective assessment, this procedure gives a
realistic color and contrast match if one looks at a laptop computer monitor
held next to a scene in a natural setting on the ground, and it is anticipated
to perform well for air- and space-based simulations as well. The results
have somewhat more subtle colors and contrast compared with many commonly
seen earth and sky images. The intent here is to make the brightness of the
displayed image proportional to the actual scene and for the perceived color to
be the same as a human observer would see in a natural setting. This is
without any exaggeration of color saturation that sometimes occurs in
satellite “natural-color” image rendering (e.g., Miller et al., 2012) and
even in everyday photography (based on the author's subjective observation; Albers, 2019). For
example, color saturation values of the sky in photography often exceed the
calculated values for even low-aerosol conditions. A more complete
consideration of the effects of atmospheric scattering and absorption in
SWIm image rendering softens the appearance of the underlying landscape when
viewed from space or otherwise afar. This is due to SWIm not suppressing the
contribution of Rayleigh scattering to radiance as observed in nature.
The fast 3D radiative transfer package called Simulated Weather Imagery has been developed to serve the development and application needs of high-resolution atmospheric modeling. Visually and physically realistic, full natural-color (e.g., Miller et al., 2012) SWIm imagery offers, for example, a holistic display of numerical model output (analyses and forecasts). At a glance one can see critical weather elements such as the fields of clouds, precipitation, aerosols, and land surface in a realistic and intuitive manner. Model results are thus more effectively communicated for interpretation, displaying weather phenomena that we see in the sky and the surrounding environment. NWP information about current and forecast weather is readily conveyed in an easily perceivable visual form to both scientific and lay audiences.
The SWIm package has run on a variety of NWP modeling systems including the Local Analysis and Prediction System (LAPS; Toth et al., 2014), WRF, RAMS, HRRR (Benjamin et al., 2016), and NAVGEM. We can thus discern general characteristics of the respective data assimilation and modeling systems including their handling of clouds, aerosols, and land surface (e.g., snow cover).
RAMS model simulation viewed from in situ vantage points just offshore from Qatar at altitudes of 4 km and 20 m above sea level.
Visualization of RAMS output was done for a case featuring dust storms over the Arabian Peninsula and the neighboring region (Miller et al., 2019; Bukowski and van den Heever, 2020) as part of the Holistic Analysis of Aerosols in Littoral Environments Multidisciplinary University Research Initiative (HAALE MURI). Figure 9 shows the result of this simulation from in situ vantage points just offshore from Qatar in the Persian Gulf at altitudes of 4 km and 20 m above sea level. From the higher vantage point we are above most of the atmospheric dust present in this case, so the sky appears bluer, with Rayleigh rather than Mie scattering dominating.
Figure 5 shows a space-based perspective of the December 2017 wildfires in southern California using NWP data from the HRRR-Smoke system. Smoke plumes from fires and areas of inland snow cover are readily visible. SWIm has been most thoroughly tested with another NWP system called the Local Analysis and Prediction System (LAPS; Albers et al., 1996; Jiang et al., 2015). LAPS produces very rapid (5 min) updates and very high resolution (e.g., 500 m) analyses and forecasts of 3D fields of cloud and hydrometeor variables. The LAPS cloud analysis is a largely sequential data insertion procedure that ingests satellite imagery (including infrared – IR – and 500 m resolution visible imagery updated every 5 min), ground-based cloud cover and height reports, radar data, and aircraft observations along with a first-guess forecast. This scheme is being updated with a 3D and 4D variational (3D- and 4D-Var) cloud analysis module that in the future will also be used in other fine-scale data assimilation systems.
Figure 7 depicts a simulated panoramic view from the perspective of an airplane cockpit at 1 km altitude using LAPS analysis with 500 m horizontal resolution. This is part of an animation designed to show how SWIm can be used in a flight simulator for aviation purposes. This visualization uses subgrid-scale terrain albedo derived from USDA 70 cm resolution airborne photography acquired at a different time. SWIm has also been used to display LAPS-initialized WRF forecasts of severe convection (Jiang et al., 2015) showing a case with a tornadic supercell that produced a strong tornado striking Moore, Oklahoma, in 2013.
Simulated images and animations from a variety of vantage points (on the ground, in the air, or in space, i.e., with multispectral visible satellite data) can be used by developers to assess and improve the performance of numerical model and data assimilation techniques. A subjective comparison of simulated imagery against actual camera images serves as a qualitative validation of both the model fields and the visualization package itself. If simulated imagery can reproduce observed images well under a representative range of weather and environmental conditions, this is an indication of the realism of the radiative transfer/visualization package (i.e., SWIm). Discrepancies between simulated and observed images in other cases may be due to shortcomings in the analyzed or model forecast states.
SWIm was tested by comparing analyses from LAPS with daytime and nighttime camera images under cloudy, precipitating, and clear or polluted air conditions, and it can realistically reproduce various atmospheric phenomena (Albers and Toth, 2018). Since camera images are not yet used as observational input in LAPS, subjective and quantitative comparisons of high-resolution observed and Simulated Weather Imagery provide a valuable opportunity to assess the quality of cloud analyses and forecasts from various NWP systems, including LAPS, the Gridpoint Statistical Interpolation (GSI; Kleist et al., 2009), HRRR, the Flow-following finite-volume Icosahedral Model (FIM; Bleck et al., 2015), and NAVGEM.
List of SWIm validation methods being developed.
Presented in either a polar or cylindrical projection, 360° panoramic and hemispheric sky images can be compared directly between camera observations and SWIm simulations.
Comparison of observed (right) to simulated (left) polar equidistant projection images showing the upward looking hemisphere from a ground-based location in Golden, Colorado, on 27 September 2018 at 22:50 UTC. LAPS analysis fields are used for the simulated image.
Figure 10 shows a comparison between a simulated and a camera-observed
all-sky image valid at the same time. The simulated image was derived from a
500 m horizontal-resolution, 5 min update cycle LAPS cloud analysis. Assuming
realistic ray tracing and visualization, the comparison provides an
independent validation of the analysis. In this case we see locations of
features within a thin, high cloud deck are reasonably well placed.
Variations in simulated and observed cloud opacity (and optical thickness)
are also reasonably well matched. This is evidenced by the intensity of the
light scattering through the clouds relative to the surrounding blue sky as
well as the size (and shape) of the brighter aureole closely surrounding the
sun. The brightness scaling being used for both images influences the
apparent size of the inner bright (saturated) part of the solar aureole in
the imagery. This saturation can occur either from forward scattering of the
light by clouds and aerosols or from lens flare. The size also varies with
cloud optical thickness and reaches a maximum angular radius at intermediate values of optical thickness.
A comparison of aerosols at 21:00 UTC on 20 August 2018 in Golden, Colorado, showing a panoramic simulated image (top) and an all-sky camera image (bottom).
It is also possible to compare simulated and camera images to validate
gridded fields of model aerosol variables. In particular, the effects of
constituents other than clouds, such as haze, smoke, or other dry aerosols, on
visibility under conditions analyzed or forecast by NWP systems can also be
instantly seen in SWIm imagery (Albers and Toth, 2018). Analogous to Fig. 10
(except its panoramic projection), Fig. 11 shows a cloud-free sky
comparison where aerosol loading was relatively high due to smoke. LAPS used a simple 1D aerosol analysis for this smoky day in Boulder, Colorado, when the aerosol optical depth (AOD) measured by a nearby AERONET station was 0.7. The area within the immediate vicinity of the sun, where the camera saturates, is excluded from the comparison.
Alternatively, solar irradiance computed by a solid-angle integration of SWIm imagery has been compared (initially via case studies) with corresponding pyranometer measurements (Fig. 2). The land surface state, including snow cover and illumination, can also be qualitatively compared with camera observations (not shown).
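A minimal sketch of such a solid-angle integration over a polar (fisheye-style) radiance image is shown below; the array names and pixel geometry are assumptions, and a real implementation would also need to handle the direct solar beam and any saturated pixels.

```python
import numpy as np

def ghi_from_allsky(radiance, zenith, solid_angle):
    """Estimate GHI [W m-2] from an all-sky broadband radiance image.

    radiance    : 2D array of broadband radiance per pixel [W m-2 sr-1]
    zenith      : 2D array of viewing zenith angle for each pixel [rad]
    solid_angle : 2D array of the solid angle subtended by each pixel [sr]
    Pixels looking below the horizon should carry zero solid angle.
    """
    weights = np.cos(zenith) * solid_angle
    return np.nansum(radiance * weights)
```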
Side-by-side comparison of global cloud coverage viewed from space at approximately 18:00 UTC on 28 April 2019 as provided by DSCOVR EPIC (camera-observed image, right), analyzed by LAPS (21 km horizontal resolution), and visualized by SWIm (simulated image, left).
For space-based satellite imagery, color images can be compared qualitatively, and visible-band reflectance can be used for quantitative comparisons.
Figure 12 shows observed imagery from Earth Polychromatic Imaging Camera (EPIC) imagery aboard the Deep Space Climate Observatory (DSCOVR; Marshak et al., 2018) satellite. This imagery is used as independent validation in a comparison with an image simulated by SWIm from a global LAPS (G-LAPS) analysis. The DSCOVR imagery was empirically reduced in contrast to represent the same linear brightness (image gamma; Sect. 3.8) relationships used in SWIm processing. The LAPS analysis comprises 3D hydrometeor fields (four species) at 21 km resolution in addition to other state and surface variables such as snow and ice cover. Visible and IR satellite imagery is utilized from GOES-16 and GOES-17, with first-guess fields from a Global Forecast System (GFS) forecast, an operational model run by the National Oceanic and Atmospheric Administration (NOAA).
The horizontal location and relative brightness of the simulated vs.
observed clouds match fairly closely in the comparison for many different
cloud systems over the Western Hemisphere. The land surface spectral albedo
also appears to be in good agreement, including areas of snow north of the
Great Lakes. The sun glint model in SWIm shows the enhanced brightness
surrounding the nominal specular reflection point in the ocean areas
surrounding the Yucatán Peninsula due to sunlight reflecting from waves
assumed to have a normal slope distribution. This can help with the evaluation
of a coupled wind and ocean wave model. There is some difference in feature
contrast due to a combination of cloud hydrometeor analysis (e.g., the
brightest clouds in central North America) and SWIm reflectance calculation
errors as well as uncertainty in the brightness scaling of the DSCOVR
imagery along with uncertainties in the snow albedo used in SWIm over
vegetated terrain. The EPIC imagery shown was obtained from the displayed
EPIC web products with color algorithms unknown to the authors; thus a
better comparison could be performed using the radiance-calibrated EPIC
data, adjusted for earth rotation offsets for the three color channels. The
color image comparison is shown here to give an intuitive illustration of a
multispectral comparison. The reflectance factor distribution for both SWIm
and DSCOVR (now using the calibrated Level 1b radiance data) in a single channel
(the red band) matches anticipated values from 5 % in the darkest clear
oceanic areas to much higher values over the brightest clouds.
MODIS-Aqua image (left) taken from passes at about 13:30 mean solar time over the Arabian Peninsula compared with SWIm visualization of a RAMS model forecast (right) from 10:00 UTC. Areas having predominantly dust, cloud liquid, and cloud ice are annotated in the images.
Figure 13 shows a comparison of color images over the Arabian Peninsula and over the Persian Gulf as generated from MODIS-Aqua observations and via SWIm simulation from a RAMS model forecast. Various environmental conditions such as lofted dust (near the Arabian Peninsula and over the Persian Gulf) as well as liquid (low) and ice (high) clouds can be seen. The microphysics and chemistry formulations in the RAMS model can be assessed and improved based on this comparison, such as minimizing an excess of cloud ice in the model simulation. The amount of dust east of Qatar over the water appears to be underrepresented in this model forecast.
In advanced validation and data assimilation applications (Sect. 4.3) an objective measure is needed for the comparison of observed and simulated imagery. For simple measures of similarity, cloud masks can be derived from both a SWIm and a corresponding camera image using for example sky color (e.g., red/blue intensity ratios). Categorical skill scores can then be used to assess the similarity of the angular or horizontal location of the clouds.
To assess the spatial coherence of image values (and thus radiances) between the simulated and observed images, the Pearson correlation coefficient between the two can be computed over the common field of view.
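As an illustrative sketch of such objective comparisons (the red/blue threshold, array names, and use of NumPy are assumptions, not SWIm's implementation):

```python
import numpy as np

def cloud_mask(red, blue, threshold=0.9):
    """Flag cloudy pixels where the red/blue intensity ratio is high
    (a clear sky is strongly blue, so the ratio stays low)."""
    return (red / np.maximum(blue, 1e-6)) > threshold

def categorical_scores(sim_mask, obs_mask):
    """Hits, misses, false alarms, and the critical success index (CSI)."""
    hits = np.sum(sim_mask & obs_mask)
    misses = np.sum(~sim_mask & obs_mask)
    false_alarms = np.sum(sim_mask & ~obs_mask)
    csi = hits / max(hits + misses + false_alarms, 1)
    return hits, misses, false_alarms, csi

def image_correlation(sim_img, obs_img):
    """Pearson correlation between simulated and observed pixel values."""
    return np.corrcoef(sim_img.ravel(), obs_img.ravel())[0, 1]
```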
SWIm image from a 3D LAPS cloud analysis using satellite data without camera input (top) is shown with a camera image (middle) and the SWIm image using 3D clouds, modified via a color ratio algorithm (bottom). The NREL camera image is from 24 May 2019 at 22:40 UTC.
In addition to feature characteristics and locations, the overall brightness and color of the simulated and observed images can be compared in this manner.
Today, NWP model forecasts dominate most weather prediction applications from the hourly to seasonal timescales. Fine-scale (up to 1 km) nowcasting in the 0–60 or 0–120 min time range is the notable exception. Whether numerical models lack realism on such fine scales cannot even be evaluated, since relevant observations are sporadic and no reliable 3D analyses, which would also be needed for successful predictions, are available on those scales. It is therefore no surprise that NWP forecasts are subpar compared with statistical or subjective methods in hazardous weather warning applications. It is a catch-22 situation: model development is hard without a good analysis, and a quality analysis is challenging to produce without a good model; this is the latest frontier of NWP development. The comparisons presented in Figs. 10 and 12 offer a glimmer of hope that model evaluation and initialization may one day be possible with advanced and computationally very efficient tools, prototyped in a simple fashion here with SWIm and LAPS as examples.
With new geostationary satellite instruments now in operation (e.g., ABI), high-resolution satellite data are abundant in the spatial,
temporal, and spectral domains. As ground-based camera networks also become
more readily available we envision a unified assimilation of camera,
satellite, radar, and other, more traditional and new datasets in NWP
models. SWIm can be used with camera images (and possibly visible satellite
images) as a forward operator to constrain model fields in a variational
minimization. One approach entails the development and use of SWIm's
Jacobian or adjoint, while other techniques employ recursive minimization.
Vukicevic et al. (2004) and Polkinghorne and Vukicevic (2011) proposed to
assimilate infrared and visible satellite data using 3D and 4D variational (3D- and 4D-Var) data assimilation methods.
Likewise, observed camera images can also be assimilated within a 3D- or 4D-Var
cloud analysis module. Such capabilities may be useful in NWP systems such
as GSI and the Joint Effort for Data assimilation Integration (JEDI).
SWIm can be used in conjunction with other forward operators (such as the CRTM and SHDOM) to compare simulated with observational ground-, air-, or space-based camera data in various wavelengths or applications. Along with additional types of observations (e.g., radar, meteorological aerodrome reports) and model physical, statistical, and dynamical constraints (e.g., using the Jacobian or adjoint), a more complete 3D and 4D variational assimilation scheme can be constructed to initialize very fine scale cloud resolving models. Such initial conditions may be more consistent with full-resolution radar and satellite data. Note that on the coarser, synoptic, and subsynoptic scales, adjoint-based 4D variational data assimilation (DA) methods such as that developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) proved superior to alternative, ensemble-based DA formulations. The authors are not aware of any credible arguments for why this would not also be the case for cloud-scale initialization.
A variational 3D tomographic analysis highlighting precipitating hydrometeors was performed with airborne passive microwave observations (Zhou et al., 2014).
In recent years several groups have experimented with extraction and use of cloud information from camera images. An example solving for a 3D cloud mask using a ground-based camera network is discussed in Viekherman et al. (2014). This has been expanded using airborne camera image radiances to perform a 3D cloud liquid analysis (Levis et al., 2015) using a similar forward operator (SHDOM) in a variational solver using a recursive minimization. A corresponding aerosol observing system simulation experiment (OSSE) analysis (Aides et al., 2013) was also performed with a ground-based camera network. A design for tomographic camera-based cloud analysis has more recently been developed (Mejia et al., 2018).
As an initial nonvariational test, the authors experimented with the use of the color ratio algorithm (illustrated in Fig. 14) to modify the analyzed 3D cloud field based on camera observations.
Since SWIm operates in three dimensions and considers multiple scattering of visible light photons within clouds, it can help perform a 3D tomographic cloud analysis. To move towards the goal of comparing observed and simulated absolute radiance values in a variational setting, two strategies are being considered. The first strategy would entail more precise calibration of camera exposure and contrast, so images can be directly compared using a root mean square statistic. A second strategy entails using the simulated image to estimate global horizontal irradiance (GHI; Sect. 3.1) and then comparing with a GHI measurement made with a pyranometer collocated with the camera.
To make SWIm more generally applicable, its ray tracing algorithms have been
extended to address simulations with various light sources, optical
phenomena (e.g., rainbows), and twilight colors (to be reported in future
publications). Current SWIm development is focused on aerosol optical
properties and multiple scattering. Ongoing work also includes refinements
to the single-scattering albedo and the phase function for various types of
aerosols, including dust and smoke. The parameterization being used to
determine the effective multiple-scattering albedo is also being refined.
A fast 3D radiative transfer model using visible wavelengths with a corresponding visualization package called Simulated Weather Imagery (SWIm) has been presented.
As summarized in Table 1, SWIm produces radiances in a wide variety of situations involving sky conditions, light sources, and vantage points. Other packages are more rigorous for the particular situations they are designed for, but that rigor comes at a significantly higher computational cost. The visually realistic SWIm color imagery of weather and land surface conditions makes the complex and abstract 3D NWP analyses and forecasts from which it is simulated perceptually accessible, facilitating both subjective and objective assessment of NWP products. Initial use of SWIm has emphasized its role as a realistic visualization tool. Ongoing development and evaluation will allow SWIm to be used in a more quantitative manner in an increasing variety of situations. To date the evaluation has focused mainly on comparisons with ground-based cameras, pyranometers, and DSCOVR imagery, though these comparisons also reflect the quality of the LAPS cloud analysis used as SWIm input in the evaluation pipeline. Specific comparison with other radiative transfer packages (e.g., CRTM, MYSTIC) is a good topic for future work.
Validation of SWIm is summarized in Table 4 and consists of both qualitative and quantitative assessment. The quality of the hydrometeor and aerosol analysis plays a role, making these joint comparisons of SWIm and the analysis techniques. Additional quantitative validation is planned to compare SWIm with other 1D and 3D radiative transfer models in a manner that is more independent of analysis quality.
Simulated time-lapse sky camera views for both recent and future weather can
be used, for example, for the interpretation and communication of weather
information to the public (an archive of near-real-time examples is available online).
A critical use of camera images in the future will be their variational assimilation into high-resolution analysis states for the initialization of NWP forecasts used in Warn-on-Forecast systems (Stensrud et al., 2013). The comparison of high-quality ground-, air-, or space-based camera imagery with simulated counterparts is a critical first step in the assimilation of such observations. The assimilation of such gap-filling observations can be especially useful in preconvective environments where cumulus clouds are present but radar echoes have yet to develop. Today's DA techniques suffer in such situations, severely limiting the predictability of tornadoes and other high-impact events. Four-dimensional variational tomographic DA is designed to combine camera and satellite imagery from multiple viewpoints. The sensitive dependence of multiple scattering in 3D visible-wavelength light propagation on the type and distribution of hydrometeors facilitates a better initialization of cloud properties throughout the depth of the clouds. This in turn can potentially extend the time span of predictability for severe weather events from the current period, starting with the emergence of organized radar echoes, back to the more subtle beginnings of cloud formation. As the spatiotemporal and spectral resolution of color imagery observed with ground-based cameras and airborne and satellite-borne instruments, along with that of corresponding NWP model output, reaches unprecedented levels, the question arises of whether variational or other DA methods can sensibly combine information from the two sources. If they can, consistent analyses of clouds and related precipitation and aerosol fields will aid situational awareness and fine-scale model initialization. SWIm, used as a 3D forward operator for camera and visible satellite imagery, may help address the above and related challenges.
The Arabian Peninsula case is calculated using the representative dust model
derived as follows from the Capo Verde site in the AERONET network (Holben
et al., 1998). We applied the EPA positive matrix factorization (PMF) 5.0 model to the AERONET retrievals to isolate a dust-dominated optical source profile and size distribution.
Optical source profile for the Capo Verde dataset.
Average normalized volume size distribution for dust-dominated days in the Capo Verde dataset.
For multiple scattering by hydrometeors other than cloud liquid, we follow a procedure similar to that described in Sect. 3.4.2, with the following primary differences. For the rain phase function, a parameterization for multiple scattering is specified via Eq. (13), including an optically thin rain component. For cloud ice, an analogous optically thin component is specified.
The RAMS model code, namelists, and documentation required for reproducing the simulation data are available online.
SA developed the SWIm concept and software and wrote the majority of the manuscript. SMS and QB performed and postprocessed RAMS model runs, while SK authored Appendix A. PX postprocessed and provided NAVGEM data. RA and EJ conducted the HRRR-Smoke simulations and postprocessed the output. ZT wrote several sections and assisted with conceptualization and experimental design. SM provided project management and advice on manuscript development.
The authors declare that they have no conflict of interest.
This article is part of the special issue “Holistic Analysis of Aerosol in Littoral Environments – A Multidisciplinary University Research Initiative” (ACP/AMT interjournal special issue). It is not associated with a conference.
This work was partially funded by a Multidisciplinary University Research Initiative (MURI) called Holistic Analysis of Aerosols in Littoral Environments (HAALE). For the HAALE MURI project the support of the Office of Naval Research is gratefully acknowledged. Additional funding was provided by the NOAA under the Cooperative Institute for Research in the Atmosphere (CIRA). Ravan Ahmadov and Eric James thank NOAA's JPSS PGRR program for funding and the rest of the HRRR-Smoke team and collaborators for helping with the model development. We thank Didier Tanre and the AERONET team for establishing and maintaining the Capo Verde AERONET site used in this investigation. We also thank Afshin Andreas and Mark Kutchenreider of the National Renewable Energy Laboratory (NREL) in Golden, Colorado, along with Will Beuttell of EKO Instruments Inc. for help in accessing their real-time all-sky camera images. We appreciate the helpful feedback and suggestions provided by two anonymous reviewers and the editor, Sebastian Schmidt.
This research has been supported by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) program (grant no. N00014-16-1-2040).
This paper was edited by Sebastian Schmidt and reviewed by two anonymous referees.