Introduction
The Arctic
region, often viewed as an early indicator of climate change, has been
recently undergoing major alterations, including alarmingly increasing
temperatures, retreating sea-ice cover and record-low ozone concentrations in
the winter (Duarte et al., 2012; Manney et al., 2011; Moritz et al., 2002;
Wang and Key, 2003). Current general circulation models (GCMs)
underestimate the rate of sea-ice decline (Stroeve et al., 2011) and might
differ substantially in terms of their projections (Kattsov and
Källén, 2005). The differences between observations and model
simulations and the scatter among models are due to the uncertainties in the
underlying physical processes. In particular, the limited understanding of
the complex aerosol and cloud processes remains one of
the major obstacles in accurately reproducing and predicting the Arctic
climate (Inoue et al., 2006; Kattsov and Källén, 2005).
Aerosols can directly reduce the incoming shortwave radiation reaching the
surface. Important examples in the Arctic include the effects of transported
biomass burning, forest fire and volcanic plumes (e.g. Stone et al., 2008;
Engvall et al., 2009; Young et al., 2012). In addition, aerosols play a
profound indirect role serving as condensation nuclei for new clouds and
modifying properties of already existing clouds. Understanding the nucleating
role of aerosols in mixed-phase-type clouds, for example, remains an
important research problem in terms of the climate impact of Arctic aerosols
(McFarquhar et al., 2011; Prenni et al., 2007; Verlinde et al., 2007). For a
particular atmospheric state, the net aerosol radiative effect depends on the
aerosol type, size, plume height as well as underlying surface albedo and
available short-wave radiation.
Because of its unique conditions, the Arctic has been an area of intense
interest for aerosol studies. The multi-month daylight and darkness periods,
isolated air masses and distinct temperature and humidity regimes result in
complex and climatologically important atmospheric phenomena. At the same
time, the availability of data, even simple meteorological measurements, is
severely limited in the Arctic because of its remoteness and harshness. As a
consequence, there are fewer than a dozen permanent Arctic stations with a
continuous track of aerosol measurements. This record is augmented by
intensive field campaigns with particular objectives concerning aerosols and
aerosol-cloud interactions such as SHEBA/FIRE-ACE (October 1997–October
1998, Curry et al., 2000; Uttal et al., 2002), ASTAR (March–April 2000,
Yamanouchi et al., 2005), ARCTAS (April, June–July 2008, Jacob et al.,
2010), ISDAC (April 2008, McFarquhar et al., 2011), ARCPAC (April 2008, Brock
et al., 2011) and POLARCAT (June–July 2008, Schmale et al., 2011).
The synergy of ground-based sunphotometer and lidar instruments has proven to
be very effective in the analysis of daytime aerosol measurements.
Sunphotometers (Shaw, 1983), based on the extinction of solar radiation,
provide aerosol optical depth (AOD). AOD is an indicator of total aerosol
column concentration and is the most important aerosol radiative parameter. A
sunphotometer measures AOD in multiple channels and yields an estimate of
particle abundance as well as aerosol size indicators (e.g. the effective
radius, reff, of the submicron and supermicron modes) from the
spectral information (O'Neill et al., 2003). Lidars measure time-gated
returns of radiation backscattered by atmospheric particles: one can obtain
vertical profiles of aerosol and cloud extinction and backscattering
coefficient based on the time difference between the emitted and
backscattered laser pulses (for a basic lidar principle see, for example,
Carswell, 1983). Lidars also provide an indication of particle size from
spectral channels and particle shape via the depolarization channels. The
combined use of sunphotometers and lidar, accompanied by supplementary
backward trajectories, satellite and other data, has been successfully
applied to characterize Arctic aerosol events during the summer time (see,
e.g., O'Neill et al., 2008a; Hoffmann et al., 2010; Stock et al.,
2012).
The occurrence and characteristics of aerosols during the polar winter,
however, have been studied to a much lesser extent. The radiation budget
during this period is determined by longwave fluxes, which result in surface
cooling and strong temperature inversions (Bradley et al., 1992). The end
result is a very stable lower troposphere that hinders vertical heat and
moisture transfer. It also reduces the aerosol deposition rate (e.g. Quinn
et al., 2007). The polar winter is also associated with cloudless ice
crystal precipitation, commonly termed “diamond dust”. Contrary to initial
conclusions (Curry et al., 1990), later studies suggest that diamond dust
exhibits a negligible radiative effect (Intrieri and Shupe, 2004). However,
reports on diamond dust occurrence and microphysical properties in the Arctic
are very scarce. Furthermore, surrounding topography can have an important
impact on the production of ice crystals. At the Eureka station, in the High
Canadian Arctic, ice crystals are reported frequently during the winter
period. Lesins et al. (2009) show that at least some of these ice crystals
are due to the advection of snow from nearby ridges. Crystals formed in this
fashion will exert a different radiative influence compared to classical
diamond dust. A better characterization of polar winter atmospheric phenomena
and aerosols in particular represents an important step towards a more
comprehensive year-round view of Arctic processes.
One of the principal shortcomings of polar winter aerosol monitoring is the
absence of AOD measurements. Star photometry and moon photometry, based,
respectively, on the radiation from bright stars and the Moon, have
consequently emerged as possible solutions to the problem. Recent studies
show the potential of moon photometry measurements using sunphotometer-type
instruments (Barreto et al., 2013; Berkoff et al., 2011). Despite inherent
problems such as changing lunar brightness, moon photometry can currently
provide AODs near full moon (Berkoff et al., 2011). The lunar cycle, however,
limits the number of observations to 30–40 % of the number typically
acquired with solar measurements. Leiterer et al. (1995) introduced star
photometry techniques based on extinction of bright-star radiation as a means
of generating consistent and regular nighttime AOD measurements. Herber et
al. (2002) successfully used a combination of sun- and star photometry to
study multi-year AOD dynamics at Ny-Ålesund in the High Arctic. This work
was based on daily AOD averages and did not focus on individual events or
process-level sub-diurnal variations. Furthermore, no coincident lidar data were
available for the study period. Alados-Arboledas et al. (2011) showed the
feasibility of combining star photometry and lidar data to study fresh
biomass burning at mid-latitudes. No similar studies of simultaneously
operating star photometers and lidars in the Arctic during the polar winter
are currently available.
In 2011 an SPSTAR star photometer joined a Raman lidar as a part of the
extensive instrumental suite for atmospheric measurements at the PEARL (Polar
Environment Atmospheric Research Laboratory) in the High Canadian Arctic
(80° N, 86° W). Measurements were acquired at the sea-level
site called 0PAL (Zero Altitude PEARL Auxiliary Laboratory). During the spring
of 2011 and 2012 both instruments were operated in tandem in order to study
optical properties of aerosols and thin clouds. The purpose of the current
paper is to show the capabilities of star photometer–lidar synergy in the
Arctic as a tool for characterizing polar winter phenomena in terms of their
optical properties. While both instruments are discussed, the focus of the
work is on star photometry with additional details on lidar analysis given
elsewhere. We present a process-level analysis of several events that were
detected and studied using the combination of the two instruments. This
event-based approach is essential to understanding the physics of underlying
processes and should precede any statistical or climatological analysis. The
results obtained are also important for validating CALIOP (Cloud-Aerosol
LIdar with Orthogonal Polarization) space-borne lidar observations acquired
during the polar winter and, alternatively, for giving a spatial context to
ground-based lidar and star photometer observations. The paper is structured
as follows: Sect. 2 presents the description of the PEARL measurement site,
Sect. 3 gives a brief technical overview of instrumentation, Sect. 4 contains
important information on data processing and error analysis, while Sect. 5
describes the principal results obtained within the context of the current
work. Finally, Sect. 6 serves as a summary with a review of the main
findings.
Data processing
Star photometry data processing
Calculation of star magnitudes
Star photometry, like astronomy, uses logarithms of the measured star flux
signal to compute star magnitudes.
If CN is the number of counts (proportional to the incoming flux) for a
particular star measured by the star photometer then the associated star
magnitude M is defined as follows:
M = -2.5 · log10(CN).
In reality, the star photometer takes a series of flux measurements (usually
five) of both a star and background immediately in the vicinity of the star. The
CN value used in calculating the star magnitude (Eq. ) is the
difference between the mean star count (SC) and background count (HC):
CN = SC - HC.
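For illustration, the magnitude computation from raw counts can be sketched as follows (a minimal Python sketch; the function name and interface are ours, not part of the SPSTAR software):

```python
import math

def star_magnitude(star_counts, background_counts):
    """Instrumental star magnitude from photometer count series.

    star_counts / background_counts: lists of raw counts (typically five
    each); the net count CN is the difference of their means, and the
    magnitude is M = -2.5 * log10(CN) as in the equations above.
    """
    sc = sum(star_counts) / len(star_counts)              # mean star count (SC)
    hc = sum(background_counts) / len(background_counts)  # mean background count (HC)
    cn = sc - hc
    if cn <= 0:
        raise ValueError("net count must be positive to take the logarithm")
    return -2.5 * math.log10(cn)
```

Because of the logarithmic scale, doubling the net count decreases the magnitude by about 0.75.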
Measurement principle
The attenuation of solar light passing through the atmosphere can be
expressed via Beer–Lambert's law (Shaw et al., 1973):
I = I0 e^(-mτ),
where I is the solar irradiance as measured on the ground, I0 is the
extraterrestrial solar irradiance, m is the air mass (e.g. Thomason et al.,
1983) and τ is the total optical depth. In this work the term “air
mass” refers to the optical air mass rather than synoptic air mass.
The value of τ can be decomposed as follows:
τ = τray + τaer + τO3 + τNO2 + τH2O,
where τray is the optical depth of molecular scattering
(Rayleigh scattering), τaer is the optical depth due to aerosols
(AOD) and τO3, τH2O,
τNO2 are the optical depths due to absorption by ozone,
water vapor, and nitrogen dioxide respectively.
In star photometry, the Beer–Lambert law, expressed in terms of stellar
magnitude, becomes the following (Leiterer et al., 1995):
M = M0 + 1.086 τ m,
where M is the measured magnitude on the ground (Eq. ), M0 is
the extra-terrestrial instrumental magnitude, and we have assumed, as per the
previous section, that flux (irradiance) and the number of counts are
proportional. The factor of 1.086 in Eq. () comes from the product
2.5 · log10(e). Two measurement methods are currently used in
star photometry: a two-star method (TSM) and a one-star method (OSM) which is
an analogue to classical sun photometry.
Two-star method (TSM)
The two-star method is a relative approach that does not require calibration
values. Applying Eq. () to each of the two stars and rearranging
yields the following (after Leiterer et al., 1995):
τ = [(M1 - M2) - (M01 - M02)] / [1.086 (m1 - m2)].
The indices refer to the two stars (also termed “low” and “high” stars
referring to their relative elevation) in the same part of the sky for which
the air mass difference is sufficiently large (Δm ≥ 1, where
Δm = m1 - m2). The magnitude difference is assumed to be instrument
independent, i.e. M01 - M02 = M01* - M02*, where M0* refers to the
extraterrestrial magnitudes taken from the astronomical catalogue of
Alekseeva et al. (1996).
Equation (6) can then be rewritten in the following form:
τ = [(M1 - M2) - (M01* - M02*)] / [1.086 (m1 - m2)].
The star photometer constantly alternates between the two stars, providing AOD
values every 5–6 min depending on the length of the star centering
procedure. TSM can be prone to significant point-to-point variations if the
atmosphere is not homogeneous (Baibakov, 2009).
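A minimal sketch of the TSM retrieval described above, assuming catalogue extraterrestrial magnitudes and precomputed optical air masses (the function name and interface are illustrative, not the operational SPSTAR code):

```python
def tsm_optical_depth(m1, m2, m0_1, m0_2, am1, am2):
    """Total optical depth from the two-star method.

    m1, m2: measured magnitudes of the low and high star;
    m0_1, m0_2: (catalogue) extraterrestrial magnitudes;
    am1, am2: optical air masses, whose difference should satisfy
    |am1 - am2| >= 1 for a reliable retrieval.
    """
    dm = am1 - am2
    if abs(dm) < 1.0:
        raise ValueError("air mass difference too small for the TSM")
    # tau = [(M1 - M2) - (M01 - M02)] / [1.086 (m1 - m2)]
    return ((m1 - m2) - (m0_1 - m0_2)) / (1.086 * dm)
```

Note the differential nature of the method: only magnitude and air mass differences enter, so instrument-wide offsets cancel.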
CRL lidar receiver optical specifications. Based on Table 1 of Nott
et al. (2012).

Channel                    Interference filter center    Principal measurement
                           wavelength (nm)
UV elastic                 354.72                        aerosols and clouds
UV N2                      386.67                        aerosols and clouds, water vapor
UV H2O                     407.52                        water vapor
Visible elastic            532.08                        aerosols and clouds
Visible N2                 607.46                        aerosols and clouds
Visible depolarization     532                           aerosols and clouds
Rotational Raman, low J    531.16                        atmospheric temperature
Rotational Raman, high J   528.63                        atmospheric temperature
Measurement stars most frequently used in polar winter
star photometry data sets at Eureka in 2010–2011 and 2011–2012.

Star (Harvard Revised    Star name   Constellation   Fraction of total    Fraction of total
Photometry Catalogue)                                measurements in      measurements in
                                                     2010–2011 (%)        2011–2012 (%)
4295                     Merak       β UMa           79                   26
7001                     Vega        α Lyr           13                   40
2421                     Alhena      γ Gem           5                    0
5191                     Alkaid      η UMa           3                    27
One-star method (OSM)
Given a value of M0 (see Sect. 4.2 on calibration), one can calculate
the optical depth, τ, for one star:
τ = (M - M0) / (1.086 m).
The OSM temporal resolution is 2–3 min. This method is also operationally
simpler than the TSM, as only one star needs to be continually followed. The
measurement accuracy of the extraterrestrial magnitudes for all wavelength
channels ultimately determines the accuracy of the OSM AODs.
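Assuming a calibrated extraterrestrial magnitude M0 and precomputed molecular and gaseous optical depths, the OSM AOD retrieval can be sketched as follows (an illustrative function of ours, not the operational code):

```python
def osm_aod(m, m0, air_mass, tau_ray, tau_o3=0.0, tau_no2=0.0, tau_h2o=0.0):
    """Aerosol optical depth from the one-star method.

    The total optical depth is tau = (M - M0) / (1.086 * air_mass);
    the aerosol component follows by subtracting the Rayleigh and
    gaseous (O3, NO2, H2O) contributions.
    """
    tau_total = (m - m0) / (1.086 * air_mass)
    return tau_total - tau_ray - tau_o3 - tau_no2 - tau_h2o
```

The accuracy of m0 directly propagates into the retrieved AOD, which is why the calibration discussion of Sect. 4.2 matters most for the OSM.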
Measurement stars and air mass range
The results presented in this paper are based on the polar winter data sets
obtained at Eureka in November 2010 and December 2011. The most frequently
used measurement stars are listed in Table 2. The air mass range was between
1 and 2 for 95 % of the measurements acquired in 2010–2011 and for
99 % in 2011–2012.
Spectral deconvolution algorithm (SDA) processing
The star photometer AOD spectra were transformed into estimates of fine and
coarse mode optical depth at a reference wavelength of 500 nm via the
spectral deconvolution algorithm (SDA) of O'Neill et al. (2003). This method
(whose output is also an AERONET (Aerosol Robotic Network) product) was
employed, for example, by O'Neill et al. (2008a) and Saha et al. (2010) to
analyze co-located sunphotometer and lidar data at Eureka and other Arctic
stations. Its basic premise, that aerosol (and cloud) optics are largely
driven by independent fine and coarse mode particle size distributions,
permits a more fundamental understanding of both optical depths, lidar
backscatter profiles and the link between the two. An error model defining
the retrieval errors associated with SDA is given in the Appendix of O'Neill
et al. (2003). A later technical memo (O'Neill et al., 2008b) corrects some
errors in the equations given in the original paper. The component errors of
the fine and coarse mode optical depths are driven by the nominal optical
depth calibration error determined by AERONET personnel for all field
instruments (see Holben et al., 1998 for a discussion of the AERONET
calibration protocol). The fine and coarse mode retrieval errors are
generally greater than or of the order of the approximate nominal calibration
error. The SDA was applied to star photometry data for bands in the
419.9–862.3 nm wavelength range.
Cloud screening of the star photometer data
Photometry data needs to be routinely cloud screened to yield aerosol trends.
Smirnov et al. (2000) describe an algorithm based on temporal AOD variations
used in the AERONET global sun photometry network. Similarly,
Pérez-Ramírez et al. (2012) apply temporal cloud screening
procedures (such as a moving average test) to star photometry data sets.
While this latter algorithm provides a consistent method to remove
cloud-contaminated points, the approach and the necessary thresholds should
be adapted based on the data set (D. Pérez-Ramírez, personal
communication, 2012). We expect, for example, that Arctic aerosol phenomena
will be weaker in magnitude than those at mid-latitudes.
Cloud filter protocol employed in this work. The three filters are meant to
be applied sequentially.

1. Range filter (0 < τ < 0.35): AOD values should lie within a
climatologically defined range. All points outside the range are removed.
2. Moving slope filter (a ≤ 0.001 min-1): the time of each measurement is
taken as the middle of a 1 h interval. The point is eliminated if the slope
of the linear fit (y = at + b) to all measurements contained in the 1 h
interval exceeds an empirically chosen threshold.
3. Outlier filter (|τ - τavrg| < 2.5σ): a point is eliminated if its
difference relative to the average value for the whole night exceeds 2.5
standard deviations (σ). The procedure is repeated until all differences
are within 2.5σ.
The filters employed in this work are described in Table 3 and partially
mimic the methodology proposed by Smirnov et al. (2000) and
Pérez-Ramírez et al. (2012). For the range condition, we have
eliminated all negative AOD values as well as AODs higher than 0.35 at
500 nm. The threshold of 0.35 was chosen as an upper Arctic-AOD-bound based
on the statistics of Herber et al. (2002) and Tomasi et al. (2007). Clouds
are significantly more variable in time than aerosols: hence one of the main
cloud filtering tests is an AOD temporal derivative. Smirnov et al. (2000)
defined a “triplet stability criterion” that employs three measurements
taken 30 s apart over a total 1 min period. For a cloud-free
atmosphere, the difference between the maximum and the minimum AODs should
not exceed 0.02, i.e. (τmax-τmin)<0.02, over this
short time interval. However, there is no analogue to such 30 s triplet
sampling for the Eureka star photometer: measurements can only be
acquired once every 3–10 min. Instead,
Pérez-Ramírez et al. (2012) used an absolute difference of 0.03
between two consecutive AOD values (obtained, on average, every 5 min) as a
filtering condition, which essentially amounts to a rejection criterion of
|dτ/dt|>0.006 min-1. This criterion turned out to
be effective for many cloud scenes (except, of course, for
temporally/spatially homogeneous clouds). The moving slope (which is
effectively a time derivative computed from an hour-long regression about
each optical depth measurement), and the pair-wise time derivative filters
are similar and perform comparably, but the former is also sensitive to
homogeneous clouds of moderate duration (1–1.5 h). The pair-wise
temporal derivative would not, on average, be sensitive to such variations
since its decision protocol is limited to the (usually shorter) temporal
range between any two measurements. We found that the empirically chosen 1 h
period for the moving slope filter, as well as the 0.001 min-1 slope
threshold, performed well for the star photometry data sets. The
moving slope threshold of 0.001 min-1 is considerably less than the
0.006 min-1 threshold employed for the pair-wise time derivative: this
is meant to make up for the loss of high frequency sensitivity brought about
by the regression over an hour. Additionally, a 1 h optical depth difference
filter is used by Pérez-Ramírez et al. (2012) to avoid the
inclusion of outliers (whereas we rely on an AERONET-type nightly
outlier filter, defined in Table 3).
The sequential application of the filters listed above amounts to a
cloud-screening test adapted to polar winter star photometry: one presumes
that outliers are very likely to be clouds because of the high-frequency
variations associated with the latter. We adjusted the threshold from the
3σ of Smirnov et al. (2000) and Pérez-Ramírez et al. (2012)
down to 2.5σ, given the observed variations in AOD.
It is expected that each filtering condition will have its own drawbacks.
For example, the outliers filter will be dependent on the fraction of the
cloud-free points in the time series, i.e. if the mean AOD value is too
high, some cloud-contaminated values will be left in. When applied
consecutively, however, we have found that most of the high-frequency
variations associated with what we interpret as cloud features are removed.
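The sequential protocol of Table 3 can be sketched as follows (an illustrative Python implementation with the thresholds quoted in the text; times are assumed to be in minutes, and the moving slope filter uses an ordinary least-squares fit over each 1 h window):

```python
import statistics

def cloud_screen(times, taus, tau_max=0.35, slope_max=0.001, n_sigma=2.5):
    """Sequential cloud screening: range, moving slope, outlier filters.

    times: measurement times in minutes; taus: AODs at 500 nm.
    Returns the (time, tau) pairs retained as presumably cloud-free.
    """
    # 1. Range filter: keep 0 < tau < tau_max.
    pts = [(t, tau) for t, tau in zip(times, taus) if 0.0 < tau < tau_max]

    # 2. Moving slope: each point is the middle of a 1 h window; reject it
    # if the slope of the linear fit over that window exceeds slope_max.
    kept = []
    for t0, tau0 in pts:
        window = [(t, y) for t, y in pts if abs(t - t0) <= 30.0]
        if len(window) >= 2:
            tbar = sum(t for t, _ in window) / len(window)
            ybar = sum(y for _, y in window) / len(window)
            denom = sum((t - tbar) ** 2 for t, _ in window)
            slope = (sum((t - tbar) * (y - ybar) for t, y in window) / denom
                     if denom else 0.0)
            if abs(slope) > slope_max:
                continue
        kept.append((t0, tau0))

    # 3. Outliers: iteratively remove points more than n_sigma standard
    # deviations from the nightly mean.
    while len(kept) > 2:
        mean = sum(y for _, y in kept) / len(kept)
        sigma = statistics.pstdev([y for _, y in kept])
        flagged = [p for p in kept if abs(p[1] - mean) >= n_sigma * sigma]
        if not flagged or sigma == 0.0:
            break
        kept = [p for p in kept if abs(p[1] - mean) < n_sigma * sigma]
    return kept
```

As in the text, the three filters only become effective when applied consecutively; none of them alone removes all cloud-contaminated points.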
Temporal cloud screening, nevertheless, cannot eliminate homogeneous clouds
with small point-to-point variations, nor can it avoid eliminating highly
variable aerosol events such as the incursion of a strong (fine mode) smoke
plume (O'Neill et al., 2003). A way to check the performance of the cloud
filtering is to use the available spectral information to distinguish between
clouds and aerosols (ibid.). In fact, the coarse mode of the SDA is, for most
Arctic cases, associated with large supermicron cloud particles.
If aerosol optics are dominated by fine mode aerosols (as they are in the
Arctic) then the application of the method results in a de facto cloud
screening algorithm whose output can be compared (or combined) with a
temporal cloud screening algorithm. Quantitatively, one can evaluate the
root-mean square difference, δflt,RMS, between the fine mode
AOD, τf and the temporally cloud-filtered AOD,
τflt:
δflt,RMS = sqrt[ (1/N) ∑ (τf - τflt)² ],
where N is the total number of points in a time series.
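This RMS metric is straightforward to compute for matched time series (a sketch; the two input lists are assumed to be co-registered in time):

```python
import math

def rms_difference(tau_fine, tau_filtered):
    """Root-mean-square difference between the SDA fine mode AOD and the
    temporally cloud-filtered AOD, evaluated over N matched points."""
    n = len(tau_fine)
    return math.sqrt(sum((f - c) ** 2
                         for f, c in zip(tau_fine, tau_filtered)) / n)
```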
We also compared the performance of the cloud filtering procedure with the
lidar vertical profiles. In many cases, clouds tend to greatly enhance (and
sometimes saturate) the lidar backscatter return. Evaluating the vertically
integrated lidar signal (lidar optical depth) relative to the τflt (while being able to visually confirm the presence of cloud from
its typically unique appearance as a high frequency, high intensity
perturbation in the backscatter coefficient profile) is thus a natural way to
ensure the quality of cloud screening.
The cloud-screening algorithm presented here was predominantly developed for
constructing intra-annual and inter-annual aerosol climatologies. With the
exception of Sect. , we chose not to use cloud screening
for the process-level (minutes timescale) analysis presented in Sect. 5, in
order to preserve AOD variations in coarse mode dominant events.
Star photometry calibration
A more detailed treatment of star photometer calibration is left to
Ivanescu (2015). Here we present only a brief discussion.
Despite the obvious advantage of the TSM not requiring a star photometer
calibration, the OSM is considered to be the main operational method. The OSM
does not necessitate atmospheric homogeneity and has a higher sampling rate.
Furthermore, A. Gröschke (unpublished data) argues that the accuracy and
error analysis is not straightforward for the TSM, given its differential
nature.
In order to make measurements with the OSM or extract individual AODs related
to the low and high stars of the TSM, one needs to derive extraterrestrial
star magnitudes, i.e. magnitudes that a star photometer would measure outside
of the atmosphere (M0 in Eq. 8). This can be done either by using
Langley-type procedures (Shaw et al., 1973) or by calculations from the TSM
data. Langley calibration in the Arctic, however, is problematic as it takes
many hours for some of the measurement stars to go through a sufficient
optical air mass change (Herber et al., 2002). This results in variable
measurement conditions and, correspondingly, calibration inaccuracies.
Consequently, calibration using a priori acquired TSM data is the de facto
calibration method in the Arctic.
Extra-terrestrial star magnitudes can be calculated from TSM data using
Eq. (). Theoretically, only one TSM point is needed to derive
M0 for a particular star. In practice however, one has to analyze at
least several nights of measurements, and preferably the entire data set, to
ensure the consistency and stability of the calibration values
(A. Gröschke, unpublished data). The problem with Eq. () is that
the analysis has to be made separately for each measurement star, which is a
lengthy and tedious procedure. One solution is to use a procedure akin to the
“calibration transfer” proposed by Pérez-Ramírez et al. (2008a) in
which several additional stars are also measured during the calibration
process (either Langley or TSM). M0 for those stars can then be easily
calculated using Eq. () by assuming the average value of τ
obtained during the calibration procedure.
We employed the star catalogue transfer function or calibration constant,
C, to consolidate the ensemble of our multi-star measurements for
calibration purposes (Ivanescu, 2015). C is defined as
C=M0∗-M0.
In theory, this allows every TSM measurement to be used to derive calibration
values common to all stars. In practice, however, some potential calibration
values need to be removed because of the inherent variability in the TSM data
(due for example, to contamination by clouds, ice deposition on the optics
and instrumental temperature variability). In this work, we imposed the
following conditions for a point to qualify for calibration: (a) the point is
not marked as cloud by the cloud screening procedure and (b) the error
associated with the measurement (δτ), does not exceed a certain
threshold. In (b) we used δτ≤0.005 (significantly less than
the 0.01–0.02 accuracy expected for normal field measurements) as a
conservative threshold for ensuring good calibration conditions. The
0.01–0.02 range is, for example, often quoted for AERONET field instruments
(Eck et al., 1999). The resulting calibration values were chosen as averages
of the points satisfying all the criteria. The mean standard deviation of C
(δC) for the bands in the range 420–862 nm was 0.027. This
corresponds to a mean relative flux error of 0.025 (δF0/〈F0〉 of Appendix A, Eq. A13).
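The selection of calibration points can be sketched as follows (illustrative only: the tuple layout and function name are ours, while the δτ ≤ 0.005 threshold is the conservative value quoted above):

```python
def calibration_constant(points, max_error=0.005):
    """Average calibration constant C = M0* - M0 from qualifying TSM data.

    points: (m0_catalogue, m0_measured, delta_tau, is_cloud) tuples.
    A point qualifies only if it is (a) not flagged as cloud by the
    screening procedure and (b) has a measurement error delta_tau not
    exceeding max_error, mirroring the criteria described in the text.
    """
    c_values = [m0_cat - m0 for m0_cat, m0, dtau, cloudy in points
                if not cloudy and dtau <= max_error]
    if not c_values:
        raise ValueError("no points satisfy the calibration criteria")
    return sum(c_values) / len(c_values)
```

The scatter of the retained c_values about this mean corresponds to the δC ≈ 0.027 quoted for the 420–862 nm bands.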
Estimation of AOD errors and uncertainties in star photometry measurements
Sources of calibration, measurement and processing errors
A variety of internal (related to the photometer itself) and external
(related to the environment and pointing accuracy) factors can result in
star photometer measurement errors and inconsistencies. Most of the
instrumental issues, such as detector linearity and temperature sensitivity
as well as dark current, are discussed in detail in Pérez-Ramírez et
al. (2008a, b) and A. Gröschke (unpublished data). Star photometry AOD
errors, nevertheless, can result from other sources. For example, TSM
measurements are sensitive to the horizontal homogeneity of the atmosphere
while the accuracy of the OSM measurements is directly dependent on the
quality of the calibration values. Furthermore, the AOD retrieved from some
of the SPSTAR visible bands can suffer from insufficiently accurate ozone
(and possibly NO2) correction, while the infrared channels can be
affected by water vapor absorption.
Setting aside the cases of the water-vapor-sensitive near-infrared (NIR)
channels (which we did not employ in this work), the most important gaseous
absorber in the visible spectral region is ozone. Using an estimated ozone
uncertainty of 31 DU (standard deviation from Eureka ozonesonde data) will
result in a corresponding standard deviation (δτ,ozone)
of 0.004 at 605 nm and 0.001 at 500 nm (assuming a random distribution in
ozone concentration errors). This is substantially less than the nominal star
photometry calibration error of δτ,cal=0.01 but is not
insignificant.
The value of NO2 optical depth that we employed for our NO2
corrections was τNO2=0.003. Measurements over Eureka during
the late polar winter of 2004 showed NO2 columnar abundances between
approximately 1.0×1015 and 2.0×1015 molecules cm-2
(Kerzenmacher, 2005). This yields a range of τNO2 between
approximately 0.0005 and 0.001 for a nominal absorption cross section of 5×10-19 cm2 applied to wavelength channels from the UV to the
blue-green portion of the spectrum (O'Neill, 1999). A conservative estimate
of 100 % for the relative NO2 optical depth error (i.e. an
absolute error of 0.003) will encompass the late winter Eureka-based
estimates of τNO2.
The estimated error in the Rayleigh optical depth as given by Frohlich and
Shaw (1980) is 0.1 % for the wavelength range of 300–900 nm: this
yields a maximum Rayleigh optical depth error of 0.00043 at 380 nm. While
this may be a bit optimistic for the Arctic it is most likely of the correct
order of magnitude and therefore negligible compared to O3 and NO2
errors. Rayleigh optical depths are also pressure corrected: we roughly
estimate the uncertainty associated with the pressure correction to be on the
order of Frohlich and Shaw's 0.1 % relative error (∼1 hPa over
1013 hPa).
We acknowledge that retrieved AOD values can be affected by rapidly changing
optical air mass at very large zenith angles (or, consequently, small star
elevation angles). Russell et al. (2005) for example, discuss AOD
uncertainties for air masses of ∼30–40. However, because of the small
air mass values used in this work (between 1 and 2, see Sect. 4.1.5), we
consider uncertainties due to air mass calculations as negligible compared to
other sources of error.
Some of the other factors that might affect AOD measurements include
imprecision in star pointing and tracking (resulting in either an
underestimated star signal or an overcompensated background correction),
vibrations due to winds (> 8 m s-1), light pollution due to the Moon or
artificial lighting, and ice deposition on the telescope.
Estimated total error in τaer
From Eq. () the total AOD error, δτaer is a
function of the errors in all the component parameters employed in its
retrieval. Expressing Eq. () in terms of numerical counts, CN
and CN0, δτaer can be estimated as follows
(see Appendix A for details):
δ(τaer) = (1/m) · sqrt{ [δ(CN0)/〈CN0〉]² + [δ(CN)/〈CN〉]² + δ²(τO3) + δ²(τNO2) + δ²(τH2O) },
where δ(CN0)/〈CN0〉 is the calibration error, δ(CN)/〈CN〉 is the
measurement error, 〈CN0〉 and 〈CN〉 are the average values of CN0 and
CN, and δ(τO3), δ(τNO2) and δ(τH2O) are the errors associated with
the estimation of the ozone,
NO2 and H2O optical depths respectively. This yields an OSM
error estimate of δ(τaer)=0.03 for a typical air mass
value of m=1.
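This error propagation can be sketched as follows (the relative count errors and the gaseous optical depth errors are inputs; the values in the test below are purely illustrative, not the field values quoted in the text):

```python
import math

def aod_error(air_mass, rel_cal_err, rel_meas_err,
              d_tau_o3, d_tau_no2, d_tau_h2o):
    """Total AOD error from quadrature propagation of the components.

    rel_cal_err = delta(CN0)/<CN0> (calibration) and
    rel_meas_err = delta(CN)/<CN> (measurement); the gaseous optical
    depth errors are added in quadrature, and the whole root is scaled
    by 1/m as in the propagation formula above.
    """
    return (1.0 / air_mass) * math.sqrt(
        rel_cal_err ** 2 + rel_meas_err ** 2
        + d_tau_o3 ** 2 + d_tau_no2 ** 2 + d_tau_h2o ** 2)
```

Note that larger air masses reduce the flux-driven part of the error, which is one reason low-elevation stars are attractive despite their tracking difficulties.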
AOD error due to incomplete cloud screening
The estimate of δτaer above is for the list of error
contributions that are readily quantified with some coarse degree of accuracy
(or they can be highly inaccurate but very small). It excludes
“catastrophic errors” such as significant ice condensation on the optics or
serious tracking errors in the star measurement or in the background
measurement modes. The oftentimes inadequate nature of temporal cloud
screening remains a highly variable error source. If we anticipate the
results of our spectral vs. temporal cloud screening comparison
(Sect. 5.5) in the presence of (spatially inhomogeneous) clouds that are
readily filtered out (Fig. 9), then we can at least obtain an
order-of-magnitude estimate of the error associated with the shortcomings of
temporal cloud screening in the presence of optically thin clouds. Based on the RMS
computations for the illustrative case of Fig. 9 we obtain δτaer,post-cloud-screening⪅0.03, a number which
will be inflated by, for example, inaccuracies in the retrieval of aerosol
fine mode optical depth, τf, and the possible presence of thin
homogeneous cloud that escapes temporal cloud screening. This estimate is an
attempt to describe a worst case scenario: in the absence of competitive
coarse mode signal, δτaer,post-cloud-screening will be significantly smaller.
CRL processing
The lidar return contains information about the atmosphere in terms of the
backscatter and extinction coefficients, β(z) and κ(z). The
former describes how much light is scattered into the backward direction and
determines the strength of the return lidar signal from the sampling volume
at altitude z. The extinction coefficient describes the combined capacity
of all particles and molecules to diminish the laser beam intensity in the
sampling volume at altitude z. The profile of the extinction coefficient
between the receiver and the sampling volume acts to attenuate the outgoing
and return signal from the sampling volume at altitude z. Assuming that the
light is scattered mostly by air molecules (index “m”) and aerosols (index
“a”), β(z) can be expressed as follows:
β(z) = βm(z) + βa(z).
One distinguishes between elastic (Rayleigh) and inelastic (Raman)
scattering. In the former case, the frequency of the scattered photon is the
same as the frequency of the incident photon. Raman scattering (which a
Raman lidar such as CRL makes use of) changes the internal energy state of
specific types of molecules in the path of the beam. The resulting frequency
shift of the scattered photon can be used to separate molecules from
aerosols as the latter undergo only elastic scattering.
There are two techniques used for the purpose of determining the aerosol
backscatter coefficient for the CRL. The first is the Klett inversion (Klett,
1981), which is applied to the elastic scattering channel at 532 nm. The
second technique, the ratio technique (Ansmann and Müller, 2005), uses
the elastic scattering channel (532 nm) and a Nitrogen Raman scattering
channel (607 nm) to obtain profiles of backscatter coefficient. The Klett
inversion requires an estimation of the aerosol extinction to backscatter
ratio (or lidar ratio, Sa), which can be difficult to estimate. The
ratio technique, however, is much noisier due to the low scattering cross
section of Raman-scattered radiation.
To make meaningful comparisons between the extinction-based star photometer
(whose output is optical depth) and the lidar, one needs an estimate of the lidar ratio
to convert the backscatter coefficient to extinction coefficient and
subsequently permit the integration of the latter vertical profile to obtain
lidar optical depth. An alternate ratio technique, by Ansmann et al. (1992),
which employs the transmission of the Raman channel to directly measure
extinction coefficient also suffers from the weak and noisy nature of the
Raman channel as well as the fact that a noise-sensitive vertical derivative
has to be applied to yield extinction coefficient.
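The backscatter-to-optical-depth conversion described above amounts to a one-line rescaling followed by a vertical integration. A minimal sketch (the profile and its amplitude are hypothetical, not CRL retrievals):

```python
import numpy as np

def lidar_optical_depth(z, beta_a, S_a):
    """Convert an aerosol backscatter profile to lidar optical depth.

    kappa_a(z) = S_a * beta_a(z): the lidar ratio converts backscatter to
    extinction; tau' is then the trapezoidal integral of kappa_a over z.
    Units: z [m], beta_a [m^-1 sr^-1], S_a [sr].
    """
    kappa_a = S_a * np.asarray(beta_a, float)
    z = np.asarray(z, float)
    return float(np.sum(0.5 * (kappa_a[1:] + kappa_a[:-1]) * np.diff(z)))

# Illustrative plume: peak backscatter 4e-7 m^-1 sr^-1, ~400 m half-width
z = np.linspace(200.0, 12000.0, 600)
beta_a = 4e-7 * np.exp(-0.5 * ((z - 5000.0) / 400.0) ** 2)
tau_prime = lidar_optical_depth(z, beta_a, S_a=71.0)   # fine mode lidar ratio
```

With these illustrative numbers the plume contributes an optical depth of a few hundredths, the same order as the fine mode AODs discussed in Sect. 5.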
A common issue with lidar monitoring is the incomplete overlap region. The
overlap region is defined as a region where the field of view of the
receiving system does not fully capture the backscatter from the transmitted
radiation. This will occur for a range of altitudes near the surface. By
combining both the Klett and ratio techniques, a correction can be applied to
the Klett inversion as shown by Wandinger and Ansmann (2002). The ratio
technique should not suffer from overlap effects because the two detectors
(which measure the elastic and inelastic signals whose ratio is calculated)
theoretically have the same incomplete overlap region (idem). In reality,
however, this is not the case, and a correction is applied to the ratio
technique analysis by using “clear-sky” measurements (minimal aerosol and
cloud) for which the profile of aerosol backscatter is weak. Applying
these overlap corrections allows the CRL to measure down to approximately
200 m for both techniques.
The CRL also measures linear depolarization ratio (DR) defined as a ratio of
the orthogonal and parallel components of the backscattered light. DR is
primarily dependent on particle habit (i.e. the gamut of possible shapes
between spherical particles and complex crystals) but can also be used as a
means of discriminating fine mode particles from coarse mode particles (see
for example O'Neill et al., 2012). Consequently, DR could potentially be used
for partial cloud/ice crystals screening validation. The CRL DR hardware and
processing algorithms, however, are still in development and so only sporadic
measurements are available. Only DR data obtained on 21 February 2011 were
used for the purposes of this paper.
Lidar optical depth computations
Simple threshold approach for aerosol/cloud discrimination
As a part of the analysis, we integrated the lidar profiles to calculate
lidar fine, coarse mode and total optical depths (we adopted the notation
whereby primed optical depths (τf′, τc′, τa′=τf′+τc′) are derived from lidar profiles,
whereas unprimed optical depths (τf, τc and τa=τf+τc) are derived from the star photometry
data). For the lidar components we assumed lidar ratios based on the
following binary fine/coarse classification scheme. Features with backscatter
coefficient values at 532 nm higher than a specific threshold
(β532,thr or simply βthr) were considered
clouds or ice crystals and assigned to a cloud/ice crystal class while all
other backscatter coefficient samples were classified as fine mode aerosols
(implicit in this latter assignment is the assumption that aerosol optical
activity is dominated by fine mode aerosols). Cloud/ice crystal samples were
assigned a lidar ratio value of Sc= 20 sr. This value is a
typical cloud lidar ratio; it is, for example, contained within the
19–25 sr range defined in the CALIPSO data processing algorithm (ASDC,
2013). All non-cloud layers were assigned a value of Sf= 71 sr
(corresponding to the CALIOP class “urban/industrial pollution”, idem, and,
for example, a value that is not far from the value of 59 sr employed by
O'Neill et al. (2012) for volcanic sulfates over Eureka). While aerosols
exhibit a fairly large natural variation in Sf, the chosen value
was found to perform well for most scenes observed at Eureka.
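A minimal sketch of this binary classification and the subsequent lidar-ratio assignment (function name, profile shape and grid are illustrative; the lidar ratios and threshold are those stated above):

```python
import numpy as np

S_F, S_C = 71.0, 20.0        # lidar ratios: fine aerosol / cloud-ice [sr]
BETA_THR = 4e-7              # cloud discrimination threshold [m^-1 sr^-1]

def classify_and_integrate(z, beta_a, beta_thr=BETA_THR):
    """Binary fine/coarse split of an aerosol backscatter profile.

    Samples above beta_thr are labelled cloud/ice crystal (lidar ratio S_C);
    all other samples are treated as fine mode aerosol (lidar ratio S_F).
    Returns (tau_f', tau_c', tau_a' = tau_f' + tau_c').
    """
    z, beta_a = np.asarray(z, float), np.asarray(beta_a, float)
    is_cloud = beta_a > beta_thr
    dz = np.gradient(z)                         # per-sample layer thickness
    tau_c = float(np.sum(S_C * beta_a[is_cloud] * dz[is_cloud]))
    tau_f = float(np.sum(S_F * beta_a[~is_cloud] * dz[~is_cloud]))
    return tau_f, tau_c, tau_f + tau_c
```

Because the split is purely a threshold on β, the only tunable quantities are βthr and the two prescribed lidar ratios, which is what the sensitivity study below exploits.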
Sensitivity study
To select a value of βthr that does not produce a significant
bias in favor of either clouds or aerosols, we performed a sensitivity study
for all events that were investigated in this study: 21 February 2011, 9 and
10 March 2011 and 13, 14 and 15 March 2012 (the detailed discussion of these
events is presented in Sect. 5). We varied βthr from 1×10-10 m-1 sr-1 (all/most features classified as clouds) to
1×10-3 m-1 sr-1 (all/most features classified as
aerosols) and studied the variation of 〈τx′〉-〈τx〉 and Rx2 (where the angle brackets
“〈〉” indicate an average, the subscript x=f, c or
a, Rx2 is the coefficient of determination and where the averages and
the Rx2 values were evaluated across the duration of the measuring
period). Our sensitivity study was focused more on fine mode aerosols (which,
as discussed above, generally means aerosols in the absence of any
significant presence of coarse mode aerosols) since this is our principal
research motivation and since fine mode aerosol variation is generally more
subtle and difficult to detect in the Arctic.
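The sweep itself is straightforward to express in code. The sketch below (all names and the synthetic setup in the test are ours, not the actual processing chain) classifies each profile at a candidate βthr, integrates the fine mode component, and compares the resulting lidar series with the star photometer series:

```python
import numpy as np

def tau_fine(z, beta_a, beta_thr, s_f=71.0):
    """Fine mode lidar AOD: integrate S_f * beta_a over samples at or below
    the cloud threshold (all other samples are classed as cloud/ice)."""
    z, beta_a = np.asarray(z, float), np.asarray(beta_a, float)
    fine = beta_a <= beta_thr
    dz = np.gradient(z)
    return float(np.sum(s_f * beta_a[fine] * dz[fine]))

def sweep_beta_thr(z, profiles, tau_star, thresholds):
    """For each candidate beta_thr, build the lidar tau_f' time series and
    compare it with the star photometer series tau_star. Returns the
    event-averaged difference <tau_f'> - <tau_f> and R^2, per threshold."""
    tau_star = np.asarray(tau_star, float)
    mean_diff, r2 = [], []
    for thr in thresholds:
        tau_lidar = np.array([tau_fine(z, p, thr) for p in profiles])
        mean_diff.append(tau_lidar.mean() - tau_star.mean())
        c = np.corrcoef(tau_star, tau_lidar)[0, 1]
        r2.append(c * c)
    return np.array(mean_diff), np.array(r2)
```

Plotting `mean_diff` and `r2` against `thresholds` reproduces the structure of the middle and bottom graphs of Fig. 2a.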
Backscatter threshold sensitivity (βthr) study for the
9 March 2011 Eureka aerosol event. (a) Top graph: event-averaged lidar and
star photometer AODs as a function of the cloud discrimination threshold
βthr. Middle graph: differences between star photometry and
lidar event-averaged AODs. The vertical dotted lines indicate values of
βthr for which 〈τf〉=〈τf′〉, 〈τc〉=〈τc′〉 and
〈τa〉=〈τa′〉 respectively. Bottom graph: coefficients of
determination between the lidar and star photometry optical depths across the
duration of the event. The two vertical grey lines indicate
βthr thresholds used in (b); (b) panes 1: star photometry and
lidar fine mode, coarse mode and total AODs (500 nm) corresponding to the
βthr values indicated above the panes; panes 2: CRL 532 nm
backscatter cross-section for the two βthr values; panes 3:
cloud/aerosol classification for the two βthr values.
Illustration using the 9 March case study
Figure 2 illustrates the results of the sensitivity study for 9 March 2011.
The top plot of Fig. 2a shows the fixed star photometer optical depth means
(〈τf〉, 〈τc〉 and
〈τa〉 averages taken across the 9 March measuring
period) and the computed values of 〈τf′〉,
〈τc′〉 and 〈τa′〉 varying as
a function of βthr while the middle plot shows the difference
between these means (note that 〈τf〉 and
〈τc〉 are practically superimposed; the relatively
large value of 〈τc〉, as will be discussed in
Sect. 5.1, is due to thin-cloud contamination). As expected
〈τf′〉→0 when βthr is very small,
and the classification algorithm declares all particles to be clouds while
〈τc′〉→0 when βthr is very large
and the classification algorithm declares all particles to be fine mode
aerosols.
The bottom graph of Fig. 2a shows the different components of Rx2
(τx′ vs. τx) varying as a function of βthr. One
can observe the promising result that the βthr zero crossing of
〈τf′〉-〈τf〉 (red dotted vertical line of the middle graph) and
βthr(Rf,peak2) are of the same order of magnitude,
while noting the more disconcerting result that the rapid variation of
Rf2 makes the difference between the two a potentially compromising problem.
However, as discussed in the next section, we can play upon the relatively
large uncertainties in the star photometer and lidar optical depths to define
a large zero crossing region which encompasses the peak in Rf2.
Figure 2b provides insight into the detailed behavior at two critical values
of βthr: a value of 2×10-7 m-1 sr-1
which corresponds to a near zero value of Rf2 and a value of 4×10-7 m-1 sr-1 which corresponds approximately to
βthr(Rf, peak2). The top pane contains total, fine
and coarse mode AODs from the SDA at 500 nm (τa,
τf and τc respectively) and the lidar AODs at
532 nm (τa′, τf′ and τc′
respectively), while pane 2 shows lidar backscatter cross-section profiles at
532 nm. Values of τf′ and τc′ were calculated in
accordance with Sect. 4.1.5 where the binary lidar ratio assignments are
determined from the aerosol/cloud classification of pane 3.
If one compares the τf′ variation of Fig. 2b with the
backscatter profiles and, in particular, the classification panes, it is
clear that the increase in τf′ from left pane 1 to right pane 1
is due to the “gain” of aerosols in the plume located at around 5 km (also
keep in mind that the 〈τf〉 component of the
comparison is fixed). This plume (which we hypothesize, from experience, to
actually be of aerosol nature) is responsible for the left to right increase
in τf′ (from the left hand pane 1 to the right hand pane 1) and
the greater thickness of the plume in the latter part of the day is
responsible for the proportionate (right hand pane 1) increase in
τf′ over that period (compared to the quasi constant value of
τf′ in the left hand pane). This increase across the measurement
period is sufficient to augment Rf2 from a negligible value of
0.02 to a significant value of 0.62 (more details are given in Sect. 5.1). It
is our contention that the most robust arbiter of physical truth is
Rx2 (and Rf2 in the particular case of fine mode
aerosols) because it can show a correlation of independent optical data and
because it is less dependent on calibration and algorithmic artifacts. The
〈τf′〉-〈τf〉 differences
of Fig. 2a are more readily swayed by the relatively large absolute
uncertainties in τf due to calibration and algorithmic
shortcomings as well as the uncertainties in τf′ due to problems
such as errors in the assigned value of Sf and errors in the
lidar calibration procedure. More details on the types of systematic errors
observed between the components of the lidar and star photometer AODs are
given in the discussion of lidar optical depth errors below and in the event
analysis of Sect. 5.
Some comments also need to be added concerning the general behavior of the
Rx2 curves in Fig. 2a. Rc2 remains moderately large and
nearly constant and then drops off for βthr>∼1×10-6 m-1 sr-1. This reflects the fact that the backscatter coefficients of what we
believe to be clouds between 7 and 10 km stand out quite distinctly until
their rather large threshold value is surpassed and all samples are declared
to be fine mode aerosols. Beyond this point, the values of Ra2
remain moderately large and constant. Since all backscatter samples have, at
this point, been declared to be fine mode aerosols, the clouds between 7 and
10 km become artificially labeled as fine mode aerosols with an attendant
artificial trend of 〈τf′〉→〈τa′〉 as the cloud particles are progressively attributed an
excessive lidar ratio of Sf=71 sr. It can also be observed in
Fig. 2a that this case of artificially large 〈τa′〉
is characterized by Ra2 values that are identical to
Ra2 values when βthr is very small; the only
differences between the two artificial cases of ostensibly pure fine and
coarse mode cases are the two different values of lidar ratio (and so the
correlation with τa is identical).
(a) βthr ranges (dashed vertical lines) where bands
of 〈τf′〉-〈τf〉,
〈τc′〉-〈τc〉 and
〈τa′〉-〈τa〉 cross
the zero line (horizontal dashed line) for an optical depth error represented
by the semi-transparent bands of red, blue and grey respectively. The diagram
is meant to be a conceptual representation of how error bar banding would
be applied to real data such as the middle graph of Fig. 2a.
(b) Derived βthr ranges for all events (with red, blue and black
representing, as always, the fine, coarse and total components). The top
graph shows ranges for an assumed optical depth error of 0.03 (Δβthr as per a), while the bottom graph shows ranges for
Rx2>0.19. The end symbols of each horizontal segment:
∘, X, □, ♢, △, • and ▴ represent, respectively,
the event dates of 9 and 10 March and 21 February 2011 and 13, 14, 15 March as well
as the combination of 13–15 March 2012. The grey, dashed vertical line
indicates the nominal value of βthr=4×10-7 m-1 sr-1
chosen for the event analyses of Sect. 5.
Ranges of optically acceptable βthr
Figure 3a shows a conceptual representation of βthr uncertainty
as a function of a presumed uncertainty in the differences of the means for
each of the three components. In the application of this concept to
〈τx′〉-〈τx〉 plots such as the middle
graph of Fig. 2a, we assumed an error equal to the nominal uncertainty of
0.03 in the star photometer optical depths as per Sect. 4.3.2 and applied it
to all analyzed events to obtain the top graph of Fig. 3b. One can observe
that the βthr ranges of 〈τf′〉-〈τf〉 are clustered near the βthr value
of 4×10-7 m-1 sr-1 represented by the dashed, grey
vertical line. Indeed, for simplicity, we assumed a βthr
nominal value of 4×10-7 m-1 sr-1 for all the case
studies discussed in Sect. 5 below, unless indicated otherwise (we leave the
discussion of the effects of this choice to those case studies). The
clustering of the fine mode βthr ranges, along with the
9 March 2011 illustration of the previous section, suggests, in a general
sense, that τf′ as well as τf can, in spite of the
typically stronger variability associated with τc′ and τc, be justifiably associated with the presence of fine mode aerosols
in the atmosphere. Those βthr ranges associated with
〈τc′〉-〈τc〉 and
〈τa′〉-〈τa〉 values that are
large merely reflect a situation where 〈τc′〉 and
〈τa′〉 change little with βthr (the
cloud/aerosol classification changes little with βthr).
The bottom graph of Fig. 3b shows the uncertainty in βthr given
a requirement that Rx2 be greater than 0.19. The threshold of 0.19
was selected in an attempt to broadly quantify a βthr range of
significant Rx2 values for all events: it represents a cutoff whose
probability distribution was significantly different from zero for all events
of the study. One can observe that the positions of
the Rf2 ranges are also clustered near the βthr
value of 4×10-7 m-1 sr-1. The notable exceptions to
this observation are isolated points of higher Rf2 values for
14 March 2012, at both large and small βthr. The former (large βthr) case represents a region
where τc′ is negligible and thus where τf′ is
characterized by Rf2 values that are strongly influenced by
coarse mode variance (when τc is not negligible and there is
every evidence in the behavior of the backscatter profile that
τc′ is artificially low). In the latter (small
βthr) case, τf′ is negligible at such a small
value of βthr, and so the correlation with τf is
optically insignificant (it depends on relatively few, generally noisy
samples of β). Finally, the reasons for the broad βthr
ranges for Rc2 and Ra2 have already been discussed
in the analysis of the 9 March 2011 illustration above.
Lidar optical depth errors
The most significant source of lidar optical depth error in terms of the
discrimination into aerosol and cloud components is arguably the sensitivity
of τf′, τc′ and τa′ to
βthr described in the previous section. One can also question
the rigor of our simplistic aerosol/cloud discrimination algorithm. However,
rather than attempt to seek out an ostensibly better algorithm using such
indicators as the color ratio of two elastic lidar bands, spatial/time
derivatives of β etc., we elected to retain the processing and ease of
interpretation advantages afforded by this standard approach while appealing
to the empirical results of Sect. 5 to justify its choice (in general, we
leave the details related to errors in lidar and star photometer optical
depths to the event based analysis of Sect. 5).
The consequence of using prescribed lidar ratios (Sf and
Sc) for the aerosol and cloud components does merit some particular
reflection because it is readily generalized. An easy-to-demonstrate
consequence of our simplistic aerosol/cloud discrimination algorithm is that
changes in Sf and Sc will not affect the fine and coarse
mode plots of Rx2 vs. βthr. They will, however,
shift the fine, coarse and total AODs, up or down, by simple multiplicative
factors ([Sf/Sf,0]τf,0′, [Sc/Sf,0] τc,0′ and [Sf/Sf,0]τf,0′+[Sc/Sf,0]τc,0′ respectively for an initial set of
prescribed values indicated by the subscript “0”). This will have a
proportionate effect on the 〈τx′〉 as well as the
〈τx′〉-〈τx〉 vs.
βthr curves and, shift the zero crossing position on the
βthr axis by a factor that is readily computed for all events
(from the multiplicative factors and from empirically derived slopes of
dlogβthr/d〈τx′〉). For the fine
mode case this yields changes in log βthr<∼0.2 for relative changes in Sf of 20 % (verified
for all our analyzed events). This change will, for example, not
substantially affect the relatively small fine mode βthr ranges
observable in Fig. 3b.
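The invariance of Rx2 under a change of prescribed lidar ratio, and the accompanying multiplicative shift of the AODs, can be verified directly. The series below are synthetic stand-ins (not measured data), with Sf,0 = 71 sr rescaled to the 59 sr value mentioned in Sect. 4:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_f = 0.05 + 0.02 * rng.random(40)                    # photometer series (hypothetical)
tau_f_lidar_0 = tau_f + 0.01 * rng.standard_normal(40)  # lidar series for S_f,0 = 71 sr

def r_squared(x, y):
    """Coefficient of determination via the Pearson correlation."""
    c = np.corrcoef(x, y)[0, 1]
    return c * c

# Re-prescribing the lidar ratio rescales the lidar AOD series by S_f / S_f,0 ...
tau_f_lidar_new = (59.0 / 71.0) * tau_f_lidar_0

# ... which shifts <tau_f'> (and hence the zero crossing on the beta_thr axis) ...
mean_shift = tau_f_lidar_new.mean() - tau_f_lidar_0.mean()

# ... but leaves the coefficient of determination unchanged, since the
# Pearson correlation is invariant under positive linear scaling.
r2_0 = r_squared(tau_f, tau_f_lidar_0)
r2_new = r_squared(tau_f, tau_f_lidar_new)
```

This is why the Rx2 vs. βthr curves are insensitive to the choice of Sf and Sc while the mean-difference curves are not.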
Eureka aerosol event, 9–10 March 2011. For a description of each
pane, see the caption of Fig. 2b.
Event analysis
In this section we present a process-level analysis of several events
detected at Eureka during the polar winters of 2010–2011 and 2011–2012.
With the exception of Sect. 5.5, star photometry data were not cloud screened
because of the objective to include in our analysis the dynamics of AOD
variations in coarse mode dominant events. The latter would be affected by
the cloud-screening algorithm. Unless otherwise noted, all star photometry
AODs are reported at 500 nm, while the CRL backscatter signal refers to the
532 nm channel. All time values are given in UTC.
Zoom of the backscatter profile and the fine mode optical depths
(τf′ and τf) as a function of time on 9 March 2011 (left)
and 10 March 2011 (right). The 9 March case is the βthr=4×10-7 m-1 sr-1
(right hand) case of Fig. 2a with a focus on fine mode optical depth variation. The purple dashed vertical
lines show the approximate limits of where the plume (between 4 and 6 km on
9 March and at around 8 km on 10 March) is at its most optically active. Same
pane description as in Fig. 2b.
Short-term aerosol events (9–10 March 2011)
Figure 4 shows star photometry and lidar data obtained at Eureka between
00:00 on 9 March and 13:00 on 10 March 2011. Considerable atmospheric
complexity during the given time period is manifested by the presence of what
we interpret to be several distinct features: aerosol layers up to 6 km,
tropospheric clouds between 6 and 10 km as well as optically weak polar
stratospheric cloud (PSC) layers above 14 km. In addition, 10 March is
associated with surface-layer ice crystals in the lowest 500 m (discussed in
more detail below). The 〈τf〉 value of approximately
0.06, across the total period, is generally dominated by the low amplitude
backscatter aerosol layers between 1 and 6 km. Aerosol plumes were
especially prominent on 9 March, gradually thinning out towards the end of
the 2-day period. We see that, in general, τf′ overestimates
τf by an average difference ∼0.03. Focusing on the fine
mode variation and shorter term scales during both 9 and 10 March (Fig. 5),
the best correlation between τf′ and τf is achieved
on 9 March (left side of Fig. 5) with an Rf2 value of 0.61. On
both days, we ignored high frequency AOD variations after approximately
10:25, inasmuch as the measurements beyond that time were influenced by the
background scattering signal associated with the rising sun. For 10 March,
the degree of correlation between τf′ and τf is
marginal at best (Rf2 value of 0.18), but the temporal variation
in both τf and τf′ is weak to begin with. We would
argue, nonetheless, that both τf′ and τf react (with
a precision ⪅0.01) on both days to the most optically active
portion of the (presumed) fine mode layer extending from a few hundred meters
above ground level to around 6 km on 9 March and 8 km on 10 March (the
most optically active regions being between the dashed purple lines of
Fig. 5). It should be pointed out that the 10 March Rf2 vs.
βthr curve shows a second, marginally significant peak around
5×10-8 m-1 sr-1 in addition to the peak around 4×10-7 m-1 sr-1 (cf. Fig. 3b). The smaller
βthr value represents a βthr region where
τf′ is roughly constant across that time period (virtually all
the plume structure seen on Fig. 5 has been assigned to the cloud class) and
the resulting τf′ variation is <∼0.003. This means that
any correlation between τf′ and τf is likely
influenced, if not dominated by non-physical perturbations of
τf′.
Both examples of Fig. 5 appear to show an
appreciable sensitivity to quite small changes in fine mode aerosol optical
depth as well as a temporal coherence between passive and active
measurements which is rarely if ever reported in the literature. We note
that the PSCs at around 14 km (see also Fig. 4 for
a more general context) are characterized by optical depths that are
significantly less than the tropospheric optical depths and are a minor
influence on this analysis.
Returning to Fig. 4, one can observe that τc′ corresponds
moderately well with τc, especially for the cloud feature in the
first half of 9 March (the RMS difference between τc′ and
τc is 0.04 for the whole period, and 0.03 for 9 March). Of
particular interest are the three coarse mode peaks on 10 March that are
evident in both star photometry and lidar data. The signal enhancements are
due to surface-layer ice crystals and are discussed in more detail in Sect. 5.3.
All of these indicators would tend to confirm our original hypothesis that
both τf′ and τc′ can be approximately
derived from the βthr classification paradigm and that the
estimates are approximately coherent with τf and τc
respectively. The errors in lidar AODs inherent in such a comparison include
the errors associated with the classification criteria, the assigned lidar
ratio values (≈10-20 sr and hence <∼20-40 % error in predicted τf′ or
τc′ values), and artifacts such as the vertical streaks
(banding) observable in pane 2 of Fig. 5 (which we estimate to be <∼0.01 in those figures). Those vertical-streak artifacts are due to a low
number of photon counts in the normalization (molecular) region, which makes
it difficult to measure the backscatter coefficient in this region
accurately. The low number of photon counts is the high-altitude (near-tropopause) consequence of requiring a range of minimal aerosol
contamination in the backscatter profile. This error in the normalization
region will propagate downward in the lidar profile. The star photometer
errors include the estimated AOD calibration errors (≈0.03) and SDA
errors (≈10 %).
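A rough root-sum-of-squares combination of the error terms listed above gives a feel for the uncertainty attached to a τf′ vs. τf comparison. The function below is our own simplifying sketch (treating the terms as independent 1-sigma errors is an assumption; the 30 % lidar-ratio term is the midpoint of the 20-40 % range quoted above):

```python
import numpy as np

def comparison_error(tau, rel_lidar_ratio=0.3, streak=0.01,
                     calib=0.03, rel_sda=0.10):
    """Root-sum-of-squares error budget for a tau_f' vs. tau_f comparison.

    Terms (nominal magnitudes quoted in the text): lidar ratio error
    (~20-40 % of tau, here 30 %), vertical-streak artifacts (~0.01),
    star photometer AOD calibration (~0.03) and SDA error (~10 % of tau).
    Assumes the four terms are independent 1-sigma errors.
    """
    terms = np.array([rel_lidar_ratio * tau, streak, calib, rel_sda * tau])
    return float(np.sqrt(np.sum(terms ** 2)))
```

For a typical fine mode AOD of ∼0.06 this yields a combined error of a few hundredths, consistent with the ∼0.03 mean differences reported above.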
Same pane description as Fig. 2b but for 13–15 March 2012.
Multi-night aerosol event (13–15 March 2012)
Figure 6 shows what we suspect to be a multi-night event (low frequency
τf′ and τf variation across the three nights with
mild peaking on 14 March) as well as an illustration of the difficulties one
encounters in attempting to identify low frequency and low amplitude fine
mode events when there is relatively little temporal variation associated
with the fine mode optical depth. The mixture of aerosol and cloud on
13 March is particularly fraught with difficulties in that the τc′ and
τc signals tend to dominate their fine mode analogues earlier in
the night, while the τc′ vs. τc as well as
τf′ vs. τf results tend to diverge in the latter
part of the night. We found, as part of our βthr sensitivity
study (applied to the entire three-night period), that the latter part of
13 March was a highly sensitive classification period since classification
results changed rapidly with small changes in βthr (due to the
presence of what was likely a mixture of heterogeneous thin cloud and fine
mode aerosols). The result was that our Rf2 vs.
βthr plot showed a sharp maximum similar to the bottom graph of
Fig. 2b, but where the peak was only marginal (as per the Rf2
criterion of Fig. 3b) and below the range of acceptable
〈τf′〉-〈τf〉
differences (Fig. 3b). Thus, while the Rf2 peak suggests that
this might well be a multi-night event, the actual 〈τf′〉-〈τf〉 range seems
to indicate an inconsistency in our criteria. If one argues in favor of the
robustness of the Rf2 criteria (in favor of a multi-night
event), then we would have to appeal to such factors as τf
retrieval errors or the possibility that a simple binary cloud classification
(cloud/aerosol separation) algorithm is, at least in this case, too
simplistic.
Low altitude ice crystals (10 March 2011)
The proper detection of τc, whether it represents coarse mode
aerosols or cloud, is an important test of the performance of the SDA (which
is strongly dependent on the spectral curvature fidelity of the
star photometer optical depths) and of the performance of any cloud screening
algorithm. Figure 7 shows an extract of Fig. 4 for 10 March 2011 with the
lidar data in panes 2 and 3 displayed only for the lowest 2 km. The peaks in
star photometry AODs at 03:25, 06:35 and 09:00 have a clear association in time
with the obvious increase in backscatter coefficient in the lowest 250 m.
Furthermore, the SDA indicates that the observed features are coarse mode
(τc) dominant. While some weak fine mode backscatter layers are
present at the higher altitudes (the relatively strong tropospheric and
weaker stratospheric features of pane 2 in Fig. 4), τc′ is
clearly dominated by the low-altitude features. For the most extreme vertical
profiles between 06:00 and 08:00, the first 250 m can contribute more than
80 % to the column integrated τc′ value (and more than
60 % to the column integrated τa′). The positions of the
peaks in τc′ correspond well in time to those of
τc; the τc′ values at the τc peak
times of 03:25 and 09:00, however, are significantly lower than the
corresponding τc values. At these low altitudes the laser beam is
not entirely within the field of view of the detection optics, so it is
likely that the inconsistencies between τc′ and τc are,
at least in part, related to an incomplete overlap correction. However,
this correction is a crude approximation whose uncertainty increases with
proximity to the ground. It fails to explain why τc′>τc around the 06:35 peak, and one must therefore appeal to
additional factors to explain the discrepancy (SDA retrieval errors, errors
in cloud/aerosol classification, etc.). In the case of overlap function
problems, star photometry measurements become particularly relevant given
inherent lidar difficulties at the lowest altitudes.
An altitude and temporal zoom of Fig. 4 for 10 March 2011. The CRL
profiles are shown for the lowest 2 km.
Mid-tropospheric thin clouds (21 February 2011)
Generally, clouds are relatively opaque and strongly attenuate the inherently
weak star radiation. Some types of clouds (such as thin ice clouds, TICs),
however, can be optically thin, while extending vertically for several
kilometers. An example of such a cloud event, observed on 21 February 2011 at
Eureka, is shown in Fig. 8 (some aspects of this event were originally
discussed in Ivanescu et al., 2011).
Thin cloud event of 21 February 2011. The description of the top 3 panes
is identical to the description given in Fig. 4. Pane 4 is the CRL
linear depolarization ratio (%). The depolarization ratio data were not
corrected for overlap in the bottom-most 1 km.
The optical depth values of pane 1 show a significant variation between 0.2
and 0.8 during the 11.5-h measurement period. The SDA applied to the
star photometry data set shows the dominance of the coarse mode particles which
compose the cloud. The assumption that the coarse mode optical depth
variation can be ascribed to clouds is supported by the CRL data showing
strong backscatter coefficient features in the 3–5 km altitude range. Perhaps
more convincingly, the presence of clouds is confirmed by the high
depolarization ratio values (up to 40–50 %, pane 4),
which are spatially correlated with the high backscatter coefficient values
of pane 2. Such high depolarization ratio values are typical of ice crystal
clouds. The CRL integrated signal associated with cloud features, τc′, shows good correlation (R2=0.78) with the
star photometry coarse mode (τc). τc′ is,
nevertheless, somewhat smaller than τc beyond 05:00. The difference
can, at least in part, be due to the prescribed generic lidar ratio of 20 sr
for the clouds. A slightly higher value of Sc=25 sr might be more
appropriate as it would result in better agreement between
τc′ and τc (we would also note that the overlap
function at the relatively high-altitude positions of the clouds is not an
issue). The reader will further note that both τf′ and
τf are relatively stable with realistic values in spite of being
dominated by the coarse mode contributions. The τf values are
around 0.07 until 09:00 and agree closely with those of τf′.
Beyond 09:00 τf rises to the mean value of 0.12, but
τf′ does not undergo a similar change. This discrepancy
might, for example, be associated with the SDA uncertainties, given the
predominantly coarse mode scene and/or errors in the aerosol/cloud
classification scheme employed to retrieve τc′ (in the latter
case, the apparent stability of τf′ seen in Fig. 8, after
around 10:00, could, in actual fact, be a failure of the classification
algorithm to respond to an increased presence of fine mode particles).
Cloud-screening illustration for the 10 March 2011 low-altitude ice
crystal event. Pane 1: cloud points (green circles) that would be eliminated
based on a temporal cloud-screening algorithm (see text for details); fine
mode star photometry AOD (500 nm) is reproduced for ease of comparison. The
description of panes 2–4 is as in Fig. 2b.
Example of cloud screening
We examined the performance of temporal cloud screening for several case
studies and present one of the more instructive cases in this section.
Figure 9 shows the results of applying the temporal filters of Table 3 to the
AOD time series on the 10 March 2011 low-altitude crystal event of Fig. 7
(filter 1, the optical depth upper limit condition, is not applicable to this
case since all AODs are smaller than 0.35).
As established in Sect. 5.3, the AOD peaks centered at 03:25, 06:35 and 09:00
are due to surface layer ice crystals in the lowest 500 m. Our goal was to
evaluate the performance of the temporal cloud screening (and effectively
extend the definition of “cloud” to include these low lying ice crystals).
Pane 1 shows points that were classified by the cloud filters as cloud
contaminated by abrupt high-frequency temporal variations (the green “CldScr”
circles). For this date, the filters appeared to perform relatively well in
flagging the optical depths associated with the coarse mode peaks
(τc′ of pane 2). The remaining points of the black AOD curve of
pane 1 (which in principle are associated only with aerosol signal) should be
comparable to the fine mode red curve of pane 2 (which is reproduced in pane 1
for ease of comparison); the RMS difference between the two improved from
0.07 (without cloud filtering) to 0.03 (with it). One could go a step further
and argue that the systematically greater AODs, after cloud screening,
contain a small OD contribution due to spatially homogeneous coarse mode
aerosols and/or spatially homogeneous clouds that the temporal cloud
screening algorithm failed to eliminate. This argument is rendered more
believable because, outside the three peaks of Fig. 9, the amplitude of
τc is ∼ (τa (cloud screened) − τf).
However, one could equally well question the accuracy of
the SDA fine mode retrieval which becomes less accurate for small AODs
(O'Neill et al., 2003).
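The temporal screening and the RMS comparison described above can be sketched numerically. This is a minimal illustration only: the single point-to-point difference test, its thresholds and the toy time series below are assumptions, standing in for the actual set of filters in Table 3.

```python
import numpy as np

def temporal_cloud_screen(aod, diff_max=0.02, aod_max=0.35):
    """Flag AOD points showing abrupt high-frequency temporal variation.

    Sketch only: one point-to-point difference test with assumed
    thresholds stands in for the full filter set of Table 3.
    """
    aod = np.asarray(aod, dtype=float)
    flagged = aod > aod_max               # OD upper-limit condition (filter 1)
    step = np.abs(np.diff(aod)) > diff_max
    jump = np.zeros_like(flagged)
    jump[1:] |= step                      # large step from the previous point
    jump[:-1] |= step                     # or toward the next point
    return flagged | jump

def rms_difference(a, b):
    """RMS difference between two optical depth series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy series: a smooth fine mode background plus two abrupt coarse mode
# (ice crystal) spikes, loosely mimicking the structure of Fig. 9.
t = np.arange(0.0, 60.0, 5.0)                      # time, minutes
tau_f = 0.05 + 0.005 * np.sin(t / 30.0)            # fine mode AOD proxy
tau_a = tau_f.copy()
tau_a[[4, 9]] += 0.15                              # crystal spikes
mask = temporal_cloud_screen(tau_a)
print(rms_difference(tau_a, tau_f))                # before screening
print(rms_difference(tau_a[~mask], tau_f[~mask]))  # after screening
```

Note that a spatially homogeneous layer produces no abrupt temporal variation and therefore escapes a filter of this type, which is precisely the failure mode discussed for the low-frequency crystal and homogeneous-cloud cases.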
Summary and conclusions
In this paper, we presented recent progress related to the nighttime
optical depth retrievals of aerosols and clouds using star photometry at the
high Arctic PEARL station. Optical measurements, and specifically AOD
measurements, acquired during the polar winter are scarce compared to the
ensemble of polar summer measurements but nonetheless represent an important
source of information for the development of aerosol optical climatologies,
instrumental intercomparisons, satellite validation (such as CALIOP) and
tie-down points for aerosol/cloud models. In the spring of 2011 and 2012,
the SPSTAR star photometer was operating whenever possible, acquiring AOD
measurements in tandem with the acquisition of vertical profiles from the
CRL Raman lidar.
Star photometry is a relatively new technology that is subject to weak-signal
problems that are exacerbated by extreme Arctic conditions. The accuracy of the
derived AODs ultimately depends on the quality of derived calibration values
and other instrumental and environmental factors such as optics degradation
or background field characterization. Given the slowly changing optical air
mass values characteristic of most measurement stars, Langley calibration is
problematic in the Arctic. The SPSTAR was calibrated using differential
two-star measurements. Only points satisfying cloud filtering and measurement
uncertainty criteria were considered for calibration. The quality of the
calibration values (C) was confirmed by studying their evolution throughout
the entire measurement period. The AOD errors due to the spread in the
potential calibration values were estimated to be 0.025. The total error in
AOD, δτaer, was estimated to be δτaer⪅0.03 for an optical air mass of 1.
We note that it would be, as indicated by Eq. (A7), ∼ half
this value for an air mass of 2 (we purposefully avoided this aspect in the
text above because the optical air mass influence on optical depth error is
subject to at least two conflicting factors: the beneficial increase in
line-of-sight optical depth as the air mass increases from small values and the
increase in line-of-sight optical inhomogeneities at lower elevation angles).
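The air mass dependence of the calibration-induced AOD error can be illustrated numerically. The 1/m scaling used below is an assumption consistent with the halving noted for an air mass of 2: in the standard Beer–Langley form, ln V = C − mτ, so τ = (C − ln V)/m and a calibration uncertainty δC maps onto an AOD error of δC/m. This is a sketch, not a restatement of Eq. (A7).

```python
# Beer-Langley relation: ln(V) = C - m * tau, hence tau = (C - ln V) / m.
# A calibration uncertainty d_c therefore maps onto an AOD error d_c / m.
# d_c = 0.025 is the calibration spread quoted in the text; the 1/m form
# is an assumption consistent with the halving noted for an air mass of 2.
def aod_error_from_calibration(d_c, airmass):
    return d_c / airmass

for m in (1.0, 2.0, 4.0):
    err = aod_error_from_calibration(0.025, m)
    print(f"air mass {m}: calibration-induced AOD error = {err:.4f}")
```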
Short timescale (∼ minutes) process-level analysis of aerosol and cloud
events simultaneously captured in photometric and lidar data is essential
to ensure that extracted extensive (bulk) and intensive (per particle)
optical and microphysical indicators are coherent and physically consistent.
This type of analysis is rarely addressed in the literature and we have found
no measurement series that deal with process-level analysis of polar winter
data sets. Using the star photometry–lidar synergy we have detected and
characterized several distinct events throughout the measurement periods. In
particular, we provided case studies of the following: aerosols (short-term
aerosol events on 9 and 10 March 2011 and a potential multi-night aerosol
event across three polar nights, 13–15 March 2012), ice crystals
(10 March 2011) and thin clouds (21 February 2011). For this analysis, we
employed prescribed values of the extinction-to-backscatter (lidar) ratio
and applied these values to a simple threshold-based classification of the
lidar backscatter
coefficient images. In general, the results were encouraging in terms of the
physical coherence between fine and coarse mode star photometry ODs
(τf and τc) and corresponding lidar optical depths
of aerosol and cloud layers (τf′ and τc′).
The best correlation between τf and τf′ was
achieved for an aerosol event on 9 March with an R2 (coefficient of
determination) value of 0.61, while the measurements during the thin cloud
event observed on 21 February 2011 showed the best correlation between τc
and τc′ (R2=0.78). We also argued that
R2 was the most robust means of comparing lidar and star photometer data
since it was sensitive to significant optico-physical variations associated
with these two independent data sources while being minimally dependent on
retrieval and calibration artifacts. Differences between
τf′ and τf as well as between
τc′ and τc are clearly also useful but are
dependent on such artifacts.
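The robustness of R2 to calibration-type artifacts can be demonstrated with synthetic data (all variable names and values below are hypothetical, not measured series): adding a constant offset to one series, as a calibration bias would, leaves R2 unchanged while inflating the direct difference between the series.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line relating x and y."""
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r * r

rng = np.random.default_rng(0)
tau_f = 0.05 + 0.02 * rng.random(40)              # photometer fine mode OD (synthetic)
tau_fp = tau_f + 0.005 * rng.standard_normal(40)  # lidar counterpart with noise

biased = tau_fp + 0.02                            # constant calibration-type offset
print(r_squared(tau_f, tau_fp))
print(r_squared(tau_f, biased))                   # unchanged: R2 ignores offsets
print(np.sqrt(np.mean((tau_f - tau_fp) ** 2)))    # direct RMS difference
print(np.sqrt(np.mean((tau_f - biased) ** 2)))    # inflated by the offset
```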
Studying seasonal aerosol trends necessitates cloud-screening
procedures. We have adapted a cloud-screening algorithm for star photometry
applications to help detect cloud-contaminated optical depths based on
high-frequency optical depth variations. In addition, we used SDA-retrieved
fine mode AOD as a means of performing de facto spectral cloud screening and
accordingly, as a means of verifying the quality of temporal cloud screening.
In general, a combination of temporal filters performs well for most cloud
features with cloud-screened optical depths (AOD) being in adequate agreement
with spectrally cloud-screened optical depths (τf). Temporal
cloud screening, nevertheless, predictably fails for low-frequency variations
associated with ice crystals or homogeneous clouds. In this case, spectral
cloud screening has a distinct advantage of not being dependent on the
limitations of employing temporal variations as a means of identifying
clouds. An illustration was given where the overestimate of temporally
cloud-screened AODs relative to SDA-derived fine mode AODs could be
attributed to spatially homogeneous clouds and/or coarse mode aerosols.
We conclude by noting that the synergy employed in the present work
enabled the assemblage of evidence for events whose process-level
understanding will inevitably generate greater confidence in star photometer
retrievals as well as star photometer/lidar comparisons and will lead to the
improvement of critical statistics such as multi-year climatologies. Such an
assemblage is non-trivial in a low AOD (low signal to noise) environment
such as the Arctic.