
Planning Field Data Acquisition for Use with Remotely Sensed Data

In situ (field/lab) reflectance measurements
• How and what to measure; calibration targets
• Measurement protocols; instruments
Steps in Field Data Collection to Support Image Analysis
1. Start a GIS Database
– DEMs
– Orthophotos
– Aerial imagery
– Vegetation surveys
– Soils and hydrology
– Disturbance information (fires, roads, etc.)
Step 2. Reconnoiter the area; sketch major features and plan measurement locations so that a statistically significant data set can be acquired in the time allotted.
2. Ground Reference Data (GRD)
 GRD is necessary to train and validate photo-interpreted and remotely sensed products.
• How accurate is my map?
 Efficiency
• Value to analyst vs. feasibility in the field.
3. Preflight Fieldwork
a. Locate places to collect GCPs:
– Visible in imagery
– Buildings
– Intersections
– Structures
(Pixel location accuracy depends on the accuracy of your GPS.)
b. Locate image calibration targets:
– Large areas
– Homogeneous
– Dark and light targets
– Located throughout the study area
4. In situ Measurements in Support of Airborne or Satellite Data
Remotely sensed data must usually be calibrated:
1) Corrected geometrically (x, y, z) and radiometrically (e.g., to % reflectance); this allows comparison of remotely sensed data from different dates and quantitative analysis.
2) Data calibrated to ground data (e.g., field-measured leaf area index (LAI), biomass) and cultural characteristics (e.g., land use/cover, population density).

In situ spectrometer measurement
5. Spectral measurements are calibrated against a “white” reference standard: a NIST-standard Spectralon™ panel is used to calibrate images to reflectance over the 400-2500 nm range.
Transects across a parking lot are measured as invariant targets, used for a second-stage radiative transfer (RT) calibration improvement.
5.a. Spectralon: Approved NIST Standard
• 99% white reflecting panels
• Gray panels of known reflectance
• Rare-earth panels calibrate wavelength positions

5.b. Choose a large, bright, open feature with a uniform surface texture and color. A light color is best, but use what nature provides.
Collection location: the spatial resolution of your imagery will determine the positional accuracy necessary.
6. Image Calibration Approach
[Figure: the calibration sequence, shown as four spectra]
• Measured data: raw digital numbers (DN) vs. channel number, i.e., top-of-atmosphere radiance as recorded by the sensor.
• Instrument calibration to radiance: calibrated radiance (µW/cm²/nm/sr) vs. wavelength, 400-2500 nm.
• Radiative transfer atmospheric calibration (e.g., ACORN, FLAASH): derived surface spectral reflectance vs. wavelength, 400-2500 nm.
• Second-stage calibration with field data: the reflectance-calibrated spectrum is compared against a spectrum derived from field spectra, reducing noise in the final calibrated image.
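The second-stage refinement with invariant ground targets is often implemented as an empirical line correction: for each band, fit a linear relation between image values and field-measured reflectance of bright and dark targets, then apply it to the whole band. A sketch for one band; the two target values are hypothetical:

```python
import numpy as np

# Hypothetical per-band values for two invariant targets (dark asphalt,
# bright concrete): field-measured reflectance vs. image values at the
# same locations.
field_reflectance = np.array([0.05, 0.55])    # from the field spectrometer
image_values = np.array([120.0, 860.0])       # from the image

# Empirical line: reflectance = gain * image_value + offset, fit per band.
gain, offset = np.polyfit(image_values, field_reflectance, 1)

def calibrate(band):
    """Apply the per-band empirical line to an image band array."""
    return gain * np.asarray(band, dtype=float) + offset
```

With only two targets the line passes through both points exactly; in practice more targets spanning dark to bright give a more robust fit.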
7. Sampling Approach
Collect field data from sites relevant for the study. Examples of typical data collection:
– % cover
– Plant height and spacing
– Leaf Area Index (LAI)
– Physiological status or cover type/extent
[Map: locations where sample data were collected at Vandenberg Air Force Base; 5 community types: coastal scrub-iceplant, intact coastal scrub, chaparral-iceplant, intact chaparral, pampas-chaparral]
8. When to Collect
Depends on the time scales over which your
system varies.
• Contemporary with image data - Rapidly
changing variables; e.g., process information:
water quality, soil moisture, etc.
• Same season as image data - Seasonally
changing variables; e.g., land cover, species
distributions, LAI, etc.
• Within several years of image data -
Stationary, semi-permanent objects; e.g.,
buildings, trees, geologic features, etc.
9. What to Collect
Collect the same thematic information as your image product:
• Land use/land cover
• Biomass
• Turbidity
• Soil type
• Species map
• LAI
Additional useful information:
• e.g., for vegetation mapping: % cover, patch dimensions and orientation, photos. Presence/absence data are more useful than presence-only data.
• Efficiency: a well-designed protocol avoids unnecessary information.
10. Typical Field Measurements: Plant Water Potential, an Indicator of Plant Stress
11. Measure Leaf Area Index
[Photo: LI-COR Plant Canopy Analyzer (PCA)]
12. Leaf or shoot water content = (fresh weight − dry weight) / dry weight
[Figure: evolution of (a) EWT and dry matter during 1997 and (b) FMC for the same period, for Rhododendron, Quercus, Mollino, and Pinus. Ceccato et al. 2001, Remote Sensing of Environment]
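The water-content formula in step 12 is a one-liner; the weights below are illustrative:

```python
def leaf_water_content(fresh_weight_g, dry_weight_g):
    """Gravimetric water content per unit dry mass:
    (fresh weight - dry weight) / dry weight."""
    return (fresh_weight_g - dry_weight_g) / dry_weight_g

# A leaf weighing 0.30 g fresh and 0.12 g after oven drying:
wc = leaf_water_content(0.30, 0.12)  # 1.5 g water per g dry matter
```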
13. Field Spectrometry
• Quantitative measurement of radiance, irradiance, or reflectance in the field.
• Obtain more precise image analysis and interpretation.
• Feasibility studies to understand the detectability of materials using remote sensing.
• Make material identifications in the field.
13.a. Leafy Spurge Infestation Mapping
Theodore Roosevelt National Park, North Dakota
Collecting training spectra for image analysis
13.b. Collecting spectral library components:
• Identify large patches of monocultures of target and native species
• GPS species/community boundaries
• GPS patches of target species
[Photo: Phragmites australis]
14. Capture spatial/condition and spectral
variation within each type, such as “grass”
14.a. Collect enough measurements to understand
the variability: Mean +/- standard deviation of all
dense grass measurements
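Summarizing the variability as in 14.a is a per-band mean and standard deviation over the stack of measurements; the spectra below are made-up four-band examples:

```python
import numpy as np

# Hypothetical stack of field spectra for one cover type ("dense grass"):
# rows = individual measurements, columns = wavelengths/bands.
spectra = np.array([
    [0.04, 0.09, 0.42, 0.40],
    [0.05, 0.11, 0.46, 0.43],
    [0.03, 0.08, 0.38, 0.37],
])

mean_spectrum = spectra.mean(axis=0)
std_spectrum = spectra.std(axis=0, ddof=1)  # sample standard deviation

# Envelope used to summarize within-class variability:
upper = mean_spectrum + std_spectrum
lower = mean_spectrum - std_spectrum
```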
14.c. Woody material and dry plant litter are often confused with bare soil. Capture the variability in your samples.
14.d. Mean +/- standard deviations
for woody debris and dry plant litter
14.e. “Bare Soil” may be deceiving; be sure to collect
the features you identify. Are these bare soils? What does
the shadow tell you about the quality of this sample?
14.f. Bare soil spectra from a boreal forest in Sweden
14.g. Field spectra can help you classify and understand your image data.
[Figure panels: plant detritus and litter; bare soil]

15. Sample Units – Size
 Sample unit size should be comparable to the minimum mapping unit, i.e., 2-3 pixels.
[Figure annotations: 55% grass, 30% trees, 5% people; 10% sand; 100% grass. A too-small sample unit: what about the rest of the pixel? A too-large sample unit loses spatial information: which pixels are trees, people, etc.?]
16. Sample Units – Points vs. Polygons
 Points
– Difficult to locate individual pixels in the image.
+ Higher spatial accuracy.
+ More efficient to sample.
 Polygons
– Increased variability: difficult to characterize with a single value, which may be inaccurate for individual pixels (cf. the loss of spatial information from too-large sample units).
– Requires deciding how to delineate the polygon.
+ May be readily identifiable in the image.
16.a. Sample Units – Points
[Figure: 12 tree points, 17 not-tree points]
16.b. Sample Units – Polygons
[Figure: polygons labeled 75% trees, 0% trees, 100% trees]
16.c. Sample Units – Polygons
[Figure: polygons labeled 100% trees, 75% trees, 80% trees, 85% trees, 0% trees, 100% trees]
17. Sampling Designs
Systematic
• Costly
• May require sampling inaccessible areas
• Unbiased
• May miss rare classes
17.b. Sampling Designs
Random
• Costly
• May require sampling inaccessible areas
• Unbiased
• May miss rare classes
17.c. Sampling Designs
Stratified Random
• Costly
• May require sampling inaccessible areas
• Unbiased
• Includes rare classes
• Requires completed class map
17.d. Sampling Designs
Clustered Random
• Efficient
• Must be designed well to avoid bias, missing rare classes, etc.
17.e. Sampling Designs
Drive-by
• Efficient
• Requires effective coverage by road network
• May be substantially biased
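A stratified random design like 17.c can be drawn directly from a completed class map; the function and toy class map below are a hypothetical sketch:

```python
import random

def stratified_random_sample(class_map, n_per_class, seed=0):
    """Draw n_per_class random pixel locations from each mapped class.

    class_map: dict mapping class name -> list of (row, col) pixels;
    in practice this comes from a completed classification raster.
    """
    rng = random.Random(seed)  # fixed seed for a repeatable design
    sample = {}
    for cls, pixels in class_map.items():
        k = min(n_per_class, len(pixels))  # rare classes may have few pixels
        sample[cls] = rng.sample(pixels, k)
    return sample

# Toy map with one common and one rare class:
class_map = {
    "grass": [(r, c) for r in range(10) for c in range(10)],
    "rare_shrub": [(0, 0), (0, 1), (1, 0)],
}
sites = stratified_random_sample(class_map, n_per_class=5)
```

Note that, unlike simple random sampling, every class (including the rare one) is guaranteed some sample sites.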
18. Other Sources of Reference Data
 When you can’t go to the field…
• Existing land cover maps, geologic or soil maps
• Topographic maps
• Plant surveys
• High resolution photographs
• Historical/published data of site
 Inspect the quality of the data closely.
• How were they collected? At what scale?
• Do they have relevant thematic information?
• Are they well distributed throughout the image?
• Well-timed with the image?
• What is the associated error?
19. Validation of Image Interpretation – Continuous Data
 Image products with a continuous range of values include:
• Temperature
• Turbidity
• % cover
 Compare with GRD by regressing the field-estimated value for all test points against image-derived values.
 Compare pixel values against independently collected field data.
 Assess accuracy with R² or RMS statistics.
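A minimal validation sketch for continuous products; here R² measures agreement with the 1:1 line (fitting a regression of field on image values, as described above, is an equally common variant), and all numbers are illustrative:

```python
import numpy as np

def validate_continuous(field_values, image_values):
    """Return (R^2, RMSE) between field-estimated and image-derived
    values, with R^2 computed against the 1:1 line."""
    field = np.asarray(field_values, dtype=float)
    image = np.asarray(image_values, dtype=float)
    residuals = field - image
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((field - field.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residuals ** 2))
    return r2, rmse

# Hypothetical % cover at five test points:
r2, rmse = validate_continuous([10, 25, 40, 55, 70], [12, 22, 45, 50, 72])
```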
19.a. Validation – Continuous Data
Example: % cover of a noxious weed (Andrew & Ustin, 2008)
[Scatterplots with fitted regressions:
y = −0.0016 + 0.0066x, R² = 0.663
y = 0.06 + 0.0048x, R² = 0.368
y = −0.31 + 0.0105x, R² = 0.554
y = −0.0606 + 0.0136x, R² = 0.184]


19b. Validation – Discrete Classes
Confusion matrices and associated statistics are
used to assess the accuracy of image products
with discrete classes.
Confusion matrices tabulate the classification
results for every ground reference point, typically
with mapped class in the rows and GRD class in
the columns.
Information rich: reveal which errors are more or
less likely. Different error types may have
different costs.
Recommended sample size of ≥50 samples per
class.
20. Confusion Matrix

Ground truth (pixels)

Class         Tamarix  Creosote  Palo verde  Ironwood  Total
Unclassified       28        81          95        68    272
Tamarix           268         1          25         0    294
Creosote           11        14           9         1     35
Palo verde          4         2          82         0     88
Ironwood           18         0          27        30     75
Total             329        98         238        99    765

Correctly classified pixels (268, 14, 82, 30) occur along the main diagonal.
20.b. Overall Accuracy
(Same confusion matrix as in 20.)

Overall Accuracy = (# of ground reference points correctly classified) / (total # of ground reference points)

Overall Accuracy = Σxii / N = (268 + 14 + 82 + 30) / 765 = 394 / 765 ≈ 51.5%
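Overall accuracy falls out of the matrix directly; a sketch using the tabulated values (the Unclassified row is excluded from the diagonal but counted in the total):

```python
import numpy as np

# Rows: Unclassified, Tamarix, Creosote, Palo verde, Ironwood (mapped);
# columns: Tamarix, Creosote, Palo verde, Ironwood (ground truth).
matrix = np.array([
    [ 28, 81, 95, 68],
    [268,  1, 25,  0],
    [ 11, 14,  9,  1],
    [  4,  2, 82,  0],
    [ 18,  0, 27, 30],
])

correct = np.trace(matrix[1:])             # diagonal of the classified rows: 394
overall_accuracy = correct / matrix.sum()  # fraction of reference pixels correct
```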
20.c. Kappa Coefficient
(Same confusion matrix as in 20.)

Overall accuracy presents high-biased accuracies because it does not correct for chance agreement. The kappa coefficient takes this into account:

Κ = [N Σxii − Σ(rowi total)(coli total)] / [N² − Σ(rowi total)(coli total)]
  = (765×394 − [294×329 + 35×98 + 88×238 + 75×99]) / (765² − [294×329 + 35×98 + 88×238 + 75×99]) = 0.38

Κ > 0.85: excellent agreement; 0.7 < Κ < 0.85: very good agreement; 0.55 < Κ < 0.7: good agreement; 0.4 < Κ < 0.55: fair agreement…
21. Class Conditional Kappa

Ground truth (pixels)

Class     Tamarix  Other  Total
Tamarix       268     26    294
Other          61    410    471
Total         329    436    765

If you’re most interested in a single class, lump all the others together into an “other” category to calculate the class conditional kappa. E.g., for Tamarix:

Κ = [N Σxii − Σ(rowi total)(coli total)] / [N² − Σ(rowi total)(coli total)] = (765×678 − [294×329 + 471×436]) / (765² − [294×329 + 471×436]) = 0.76
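The kappa formula above is algebraically equivalent to (po − pe)/(1 − pe), where po is the observed agreement and pe the chance agreement; a sketch checked against the two-class Tamarix table:

```python
import numpy as np

def kappa(matrix):
    """Kappa from a square confusion matrix (rows = mapped class,
    columns = ground truth). Equivalent to the slide's
    [N*Sum(xii) - Sum(ri*ci)] / [N^2 - Sum(ri*ci)]."""
    matrix = np.asarray(matrix, dtype=float)
    n = matrix.sum()
    p_observed = np.trace(matrix) / n
    p_chance = np.sum(matrix.sum(axis=1) * matrix.sum(axis=0)) / n**2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Two-class (Tamarix vs. Other) matrix from the class conditional example:
k = kappa([[268, 26],
           [ 61, 410]])  # ~0.76
```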
