Spatial interpolation methods, definitions and differences between them
Local interpolation methods: Thiessen polygons, inverse distance weighting (IDW), and splines (used to create a surface). These interpolation models are based on deterministic mathematical functions.
Problems: they ignore the variability and error associated with environmental measurements, which are often a single snapshot in time.
They also ignore the overall spatial behavior of a given pattern, using information only from local sample data (local points/nearest neighbors).
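As a concrete example of a deterministic local method, here is a minimal IDW sketch (the sample points, power value, and estimate location are hypothetical, purely for illustration):

```python
# Minimal inverse distance weighting (IDW) sketch on hypothetical samples.
# Estimate = sum(w_i * z_i) / sum(w_i), with weights w_i = 1 / d_i**power.
import math

def idw(samples, x, y, power=2):
    """samples: list of (x, y, value); returns the IDW estimate at (x, y)."""
    num, den = 0.0, 0.0
    for sx, sy, z in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return z  # exactly on a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Hypothetical elevation samples (x, y, z)
pts = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 30.0)]
est = idw(pts, 5, 5)   # all three samples are equidistant here, so est = mean
```

Note that the estimate depends only on distances to nearby points, which is exactly the limitation described above: no model of the overall spatial structure is used.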
Geostatistical Interpolation Methods: developed originally in the gold mining industry based on the empirical work of Daniel Krige.
Geostatistics is based on random processes with dependence. In a spatial context, such dependence is called spatial autocorrelation.
Geostatistics includes a body of statistical techniques based on the theory of spatial random processes.
Makes unbiased predictions with minimum and known variance or error.
Variogram Definition
They are used to characterize the spatial autocorrelation across a surface that we have sampled.
Accurate estimates of variograms are needed for reliable prediction by kriging.
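An experimental semivariogram can be computed directly from the formula γ(h) = (1 / 2N(h)) Σ (z_i − z_{i+h})². A sketch on a hypothetical 1-D transect (the data values are invented for illustration):

```python
# Sketch: experimental semivariogram for a 1-D transect of hypothetical samples.
# gamma(h) = (1 / (2 * N(h))) * sum((z[i] - z[i + h]) ** 2) over all pairs at lag h

def semivariance(z, lag):
    pairs = [(z[i], z[i + lag]) for i in range(len(z) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

z = [3.0, 4.0, 4.5, 5.0, 5.5, 5.0, 4.0]   # hypothetical samples at unit spacing
gammas = [semivariance(z, h) for h in range(1, 4)]
```

Plotting semivariance against lag and fitting a model (spherical, exponential, etc.) to it gives the variogram model used in kriging.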
Factors impacting accuracy of a variogram model
Factors Affecting Reliability of Experimental Variograms:
Sample size
Sampling interval and spatial scale
Lag interval
Statistical distribution
Anisotropy
Trend
Anisotropy: Experimental Variograms
A variogram's variation can differ from one direction to another, i.e., it can be anisotropic.
In many instances, the anisotropy (directional change in the data) is such that it can be made isotropic (no directional change) by a simple linear transformation of the spatial coordinates.
e.g., a north-to-south precipitation gradient
Kriging process: Geostatistical method
Kriging is the Geostatistical method of prediction.
It is the best linear unbiased predictor: its prediction error variances are minimized.
It is a weighted moving average whose weights depend on the variogram and the configuration of the sample points within the neighborhood of its targets.
The equations for kriging are contained in matrices and vectors that depend on the spatial autocorrelation among the measured sample locations and prediction location.
Its biggest strength is that it provides error estimates for its interpolations.
The autocorrelation values come from the semivariogram model.
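The matrix form of those equations can be sketched for one prediction point. Everything here is hypothetical: the linear variogram γ(h) = h, the three sample points, and their values are chosen only to show the structure of the ordinary kriging system (variogram matrix plus a Lagrange row enforcing weights that sum to 1):

```python
# Sketch: ordinary kriging weights for one prediction point, using an
# assumed linear variogram gamma(h) = h (illustrative model, not fitted).
import numpy as np

def gamma(h):                       # assumed variogram model
    return h

# Hypothetical sample locations and the prediction location
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
target = np.array([0.5, 0.5])
n = len(pts)

# Kriging matrix: variogram between samples, plus the Lagrange row/column
A = np.ones((n + 1, n + 1))
A[n, n] = 0.0
for i in range(n):
    for j in range(n):
        A[i, j] = gamma(np.linalg.norm(pts[i] - pts[j]))

b = np.ones(n + 1)
b[:n] = [gamma(np.linalg.norm(p - target)) for p in pts]

w = np.linalg.solve(A, b)           # w[:n] are the weights, w[n] the multiplier
values = np.array([10.0, 20.0, 30.0])   # hypothetical measurements
prediction = w[:n] @ values
kriging_variance = b @ w            # the error estimate at the target
```

The last line is the point made above: unlike IDW, kriging returns a prediction variance alongside the prediction itself.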
What are the 5 steps to Watershed Delineation?
Creating a depressionless DEM
Flow Direction
Flow Accumulation
Watershed outlet points
Delineating watersheds
Creating a depressionless DEM
A digital elevation model (DEM) free of sinks – a depressionless DEM – is the desired input to the flow direction process.
Sinks are areas of internal drainage, that is, areas that do not drain out anywhere.
The presence of sinks may result in an erroneous flow-direction raster.
Creating Flow Direction
Principle: assigns a flow direction code to each cell, based on the steepest downhill slope as defined by the DEM.
8 possible direction codes indicating the cells towards which the water flows.
Does not work for depressions; these have to be filled beforehand.
Common methods:
D8 → single steepest-descent direction; most commonly used in watershed algorithms
MFD (multiple flow direction) → distributes flow among all downslope neighbors
D-Infinity (DINF) → assigns a continuous flow angle and splits flow between the two neighboring cells closest to it
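The D8 principle can be sketched on a tiny grid (the DEM values are hypothetical; diagonal neighbors are divided by their longer distance, as in the standard steepest-drop rule):

```python
# Sketch: D8 flow direction on a tiny, depressionless DEM (hypothetical data).
# Each cell drains toward its steepest-downhill neighbor among the 8 around it.
import math

dem = [[9.0, 8.0, 7.0],
       [8.0, 6.0, 5.0],
       [7.0, 4.0, 3.0]]   # elevations fall toward the lower-right corner

def d8_direction(dem, r, c):
    """Return (dr, dc) of the steepest-descent neighbor, or None (sink/outlet)."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                dist = math.hypot(dr, dc)   # diagonal neighbors are farther
                drop = (dem[r][c] - dem[rr][cc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best
```

A cell with no downhill neighbor returns None, which is exactly why sinks must be filled first: a real D8 raster needs a valid direction code for every interior cell.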
Creating Flow Accumulation
= creates a raster of accumulated flow into each cell.
Flow Accumulation: Number of cells draining into a given cell along the flow network
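Given a flow-direction grid, accumulation is just a count of upstream cells. A sketch on a hypothetical 3×3 direction grid (directions are encoded as (dr, dc) offsets; None marks the outlet):

```python
# Sketch: flow accumulation on a tiny grid, given a D8 flow-direction grid
# (hypothetical directions; every cell ultimately drains to the lower-right).
dirs = [[(1, 1), (1, 1), (1, 0)],
        [(1, 1), (1, 1), (1, 0)],
        [(0, 1), (0, 1), None]]

rows, cols = 3, 3
acc = [[0] * cols for _ in range(rows)]

# Route one unit of flow from every cell downstream to the outlet,
# counting how many upstream cells pass through each cell.
for r in range(rows):
    for c in range(cols):
        rr, cc = r, c
        while dirs[rr][cc] is not None:
            dr, dc = dirs[rr][cc]
            rr, cc = rr + dr, cc + dc
            acc[rr][cc] += 1
```

The outlet cell ends up with the highest count (8 of the 9 cells drain through it here), which is why thresholding the accumulation raster reveals the stream network.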
Find the stream network
Stream networks can be delineated from a digital elevation model (DEM) using the output from the flow accumulation tool and applying a threshold value.
Stream Order: for identifying and classifying types of streams based on their numbers of tributaries.
Watershed Delineation
Determines the contributing area above a set of cells in a raster.
• Differences between local, focal and zonal functions
Local functions: the local tools are those where the value at each cell location on the output raster is a function of the values from all the inputs at that location.
→ Arithmetic and Boolean operations are examples of local functions
Neighborhood functions: Neighborhood tools create output values for each cell location based on the location value and the values identified in a specified neighborhood.
The neighborhood can be of two types: block and focal.
→ block statistics: neighborhoods do not overlap
→ focal statistics: neighborhoods overlap
Zonal Functions:
The zonal tools allow you to perform analysis where the output is a result of computations performed on all cells that belong to each input zone.
Zonal statistics: calculates statistics on values of a raster within the zones of another dataset.
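The three function types can be contrasted in a few lines of NumPy (the rasters and zone layout are hypothetical; the focal example uses a 3×3 mean with edge padding):

```python
# Sketch of local, focal, and zonal operations with NumPy (hypothetical rasters).
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=float)
b = np.array([[10, 10], [20, 20]], dtype=float)

# Local: each output cell depends only on the same cell in each input
local_sum = a + b                       # cell-by-cell arithmetic

# Focal: each output cell depends on a neighborhood (3x3 mean, edges padded)
padded = np.pad(a, 1, mode="edge")
focal_mean = np.zeros_like(a)
for r in range(a.shape[0]):
    for c in range(a.shape[1]):
        focal_mean[r, c] = padded[r:r + 3, c:c + 3].mean()

# Zonal: statistics of one raster within the zones of another
zones = np.array([[1, 1], [2, 2]])
zonal_mean = {z: a[zones == z].mean() for z in np.unique(zones)}
```

The focal loop above corresponds to focal statistics (overlapping neighborhoods); block statistics would instead step the window in non-overlapping 3×3 tiles.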
Calculating areas and perimeters using raster data
Area: the area (or surface) represented by each cell consists of the same width and height and is an equal portion of the entire surface.
Perimeter: Estimate by analyzing edge cells or using raster-to-vector conversion for precision.
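A sketch of both calculations on a hypothetical binary raster with 10 m square cells (the edge-counting perimeter estimate is one common approach, not the only one):

```python
# Sketch: area and approximate perimeter of a class in a raster grid,
# assuming square cells of known size (hypothetical 10 m cells).
cell = 10.0            # cell width/height in meters
grid = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 1]]     # 1 = class of interest

rows, cols = len(grid), len(grid[0])
area = sum(row.count(1) for row in grid) * cell * cell   # 5 cells * 100 m^2

# Perimeter estimate: count exposed edges of class cells (edges bordering the
# grid boundary or a non-class cell), then multiply by the cell size.
edges = 0
for r in range(rows):
    for c in range(cols):
        if grid[r][c] == 1:
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols) or grid[rr][cc] == 0:
                    edges += 1
perimeter = edges * cell
```

Edge counting follows the stair-stepped cell boundaries, so it overestimates the perimeter of smooth shapes; raster-to-vector conversion gives a more precise outline.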
What is remote sensing?
The art, science, and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring, and interpreting imagery and digital representations of energy patterns derived from noncontact sensor systems.
Benefits of using remote sensing products in environmental sciences
Aerial perspective at global, national/regional, and local scales;
Historical imagery can document change which can help us to understand the human and/or physical processes at work
Obtain knowledge beyond human visual perception (day/night), even in inclement weather
Information extraction:
3 dimensional terrain characteristics
land-use/land-cover
Biophysical properties
Electromagnetic spectrum
the sun produces a continuous spectrum of energy from gamma rays to radio waves that continually bathe the Earth in energy.
The visible portion of the spectrum may be measured using wavelength (in micrometers or nanometers, i.e., μm or nm) or electron volts (eV); the units are interconvertible.
Various resolutions of remote sensing systems
Spatial: the size of the field of view, e.g 10x10m
Spectral: the number and size of spectral regions, e.g. blue, red, thermal
Temporal: how often the sensor collects data
Radiometric: sensitivity of detectors to small differences in electromagnetic energy
Spectral signatures of vegetation versus water
Vegetation has a remarkably high reflection in the near infrared channel 4 and a low reflection in the visible red channel 3. Generally, water only reflects in the visible light range.
The value range of the NDVI is -1 to 1. Negative values of NDVI (values approaching -1) correspond to water. Values close to zero (-0.1 to 0.1) generally correspond to barren areas of rock, sand, or snow.
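NDVI is computed directly from that red/NIR contrast: NDVI = (NIR − Red) / (NIR + Red). A sketch on hypothetical reflectance values for vegetation, water, and bare ground:

```python
# Sketch: NDVI = (NIR - Red) / (NIR + Red) on hypothetical reflectance values.
import numpy as np

nir = np.array([0.50, 0.05, 0.30])   # vegetation, water, bare soil (hypothetical)
red = np.array([0.08, 0.10, 0.25])
ndvi = (nir - red) / (nir + red)
```

As described above, the vegetation pixel comes out strongly positive, water negative, and bare ground near zero.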
Basic knowledge of various sensors (multispectral, hyperspectral, thermal, LiDAR)
multispectral=
1. Quantify light reflectance response in multiple discrete bands
2. Includes bands outside of the visible spectrum
3. Commonly NIR (700 - 900nm)
4. Quantify relationships with spectral indices
hyperspectral=
1. Quantify light reflectance
2. Many narrow (< 10 nm) spectral bands
3. Visualize phenomena that are not apparent to the human eye
4. Quantify relationships with spectral indices
thermal=
1. Sensitive to heat
2. Long-Wave IR (LWIR) Spectral Range: ~7500 - 13,500 nm
3. Within an atmospheric window (the atmosphere is largely transparent at these wavelengths)
4. Thermographic
LiDAR=
1. Range finder
2. Complex systems can produce 100,000+ points/sec
3. Precise internal timing and calibration reduces ranging uncertainty
Image classification methods and differences between them
Image classification: is the task of assigning classes to all the pixels in a remotely sensed image.
The output raster from image classification can be used to create thematic maps.
Supervised classification: is where you decide what class categories you want to assign pixels or segments to. These class categories are referred to as your classification schema.
The user selects representative sites for each land cover class in the image. These sites are called training samples.
A training sample has location information (point or polygon) and associated land cover class.
The image classification algorithm uses the training samples to identify the land cover class in the entire image.
Unsupervised classification: is where you let the computer decide which classes are present in your image based on statistical differences in the spectral characteristics of pixels.
Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible)
Programs, called clustering algorithms, are used to determine the natural (statistical) groupings or structures in the data.
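The clustering idea can be sketched with a tiny k-means on one band (the pixel values, number of clusters, and starting centers are all hypothetical; real classifiers cluster multi-band spectral vectors):

```python
# Sketch: unsupervised grouping of pixel values with a minimal k-means
# (hypothetical single-band values, k = 2).
import statistics

pixels = [0.1, 0.15, 0.12, 0.8, 0.85, 0.9]   # two obvious spectral groups
centers = [0.0, 1.0]                          # initial cluster centers

for _ in range(10):                           # alternate assign / update steps
    clusters = [[], []]
    for p in pixels:
        k = min(range(2), key=lambda i: abs(p - centers[i]))
        clusters[k].append(p)
    centers = [statistics.mean(c) if c else centers[i]
               for i, c in enumerate(clusters)]
```

The resulting spectral clusters carry no labels; as noted above, the analyst must still match them to information classes (e.g., water, forest) afterwards.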
Image Scale
Scale of Aerial Photography - Flat Terrain:
There are 2 methods for determining the scale of aerial photographs:
Ratio between photo distance and ground distance: compare a distance measured on the air photo (ab) with the corresponding distance in the real world (AB), S = ab/AB
Ratio of the camera focal length to the flying height above the terrain: S = f/(H − h)
Scale: can be expressed as:
Verbal scale: e.g 1-in on the air photo equals 2,000 ft in the real world
Representative Fraction (RF) (dimensionless): e.g 1/2000 or 1:24,000
Graphic scale
Large scale photos: small value in the denominator of the representative fraction
Typically: RF larger than 1:20,000
Medium scale photos:
Typically: RF between 1:20,000 and 1:100,000
Small scale photos: large denominator in the RF
Typically: RF smaller than 1:100,000
Scale over Variable Relief Terrain:
An infinite number of different scales is present because of the varying elevation of the terrain.
Higher elevations: larger scale (closer)
Lower elevations: smaller scale (farther away)
Usually an average scale is used.
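Both scale computations can be sketched numerically. All numbers here are hypothetical; S = f/(H − h) is the standard photogrammetric relation between focal length, flying height above datum, and terrain elevation:

```python
# Sketch: photo scale as a representative fraction, two ways (hypothetical data).

# 1) Ratio of photo distance to ground distance (same units)
photo_dist_m = 0.05        # 5 cm measured on the photo
ground_dist_m = 1000.0     # 1 km on the ground
scale_1 = photo_dist_m / ground_dist_m        # = 1/20,000

# 2) Focal length over flying height above the terrain: S = f / (H - h)
f = 0.152                  # 152 mm focal length, in meters
H = 3092.0                 # flying height above the datum, in meters
h = 52.0                   # terrain elevation, in meters
scale_2 = f / (H - h)      # = 0.152 / 3040 = 1/20,000

denominator = round(1 / scale_2)
```

The second form also shows why relief changes the scale: the same flight gives a larger scale (smaller denominator) over high ground, since H − h shrinks.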
UAV components
1. the aerial platform, which includes the airframe, the navigation system, the power system, and the payload
2. the ground control station (GCS), which allows the human control from a remote emplacement
3. the communication system
Planning a UAV mission
How large is the area of interest?
What is financially feasible?
How large is the intended camera?
Is the survey collection path accessible or will UAS need to be used?
What additional components are needed, and does the field area support these? (e.g., batteries, specific weather conditions)
Structure from motion concept
Structure-from-Motion photogrammetry is an emerging technique used to create 3D point clouds with associated color for the area of interest.
The key things to notice here are that the camera needs to continually be moving (no two photos are taken from the same location) and the photographs need to include the same features, so that features are in multiple photographs.
Scientists use this for many applications.
Benefits of SfM:
Low cost photogrammetric method for high resolution topographic reconstruction
Suitable for low budget research
Application in remote areas
It differs fundamentally from conventional photogrammetry in that the geometry of the scene, the camera positions, and the orientations are solved automatically, without the need to specify a priori a network of targets with known 3D positions.
The new generation of image matching algorithms allow for unstructured image acquisition.
Workflow of structure from motion
Find the same features in multiple overlapping photographs.
Estimate camera locations and orientations, and triangulate a sparse (low-density) point cloud.
Multi-view stereo matching: takes the sparse point cloud and camera locations to populate the model with more points.
Georectification: converts the point cloud from an internal, arbitrary coordinate system into a geographic coordinate system.
Georectification
Georectification: means converting the point cloud from an internal, arbitrary coordinate system into a geographical coordinate system. This can be achieved in one of two ways:
directly: with knowledge of the camera positions and focal lengths
Indirectly: by incorporating a few ground control points (GCPs) with known coordinates. Typically these would be surveyed using differential GPS.
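The indirect (GCP-based) route can be sketched as fitting a transform that maps model coordinates onto surveyed coordinates. The GCPs below are hypothetical, and a 2-D affine fit stands in for the full 3-D similarity transform used in practice:

```python
# Sketch: indirect georectification of 2-D points via an affine transform
# fitted to ground control points with least squares (hypothetical GCPs).
import numpy as np

# Arbitrary-model coordinates and their surveyed geographic counterparts
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
world = np.array([[100.0, 200.0], [102.0, 200.0],
                  [100.0, 202.0], [102.0, 202.0]])

# Solve world ~= [model | 1] @ M for the 3x2 affine matrix M
A = np.hstack([model, np.ones((len(model), 1))])
M, *_ = np.linalg.lstsq(A, world, rcond=None)

def to_world(p):
    return np.append(p, 1.0) @ M

georef = to_world([0.5, 0.5])   # maps the model midpoint into world coordinates
```

With more GCPs than unknowns, least squares averages out survey error, which is why a few well-distributed differential-GPS points are enough in practice.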
Image preprocessing methods and application
Image preprocessing steps can be classified into two main groups:
Radiometric preprocessing
Geometric preprocessing
Radiometric Preprocessing: Radiometric preprocessing influences the brightness values of an image to:
Correct for sensor malfunctions
Adjust the values to compensate for atmospheric degradation.
Radiometric Error caused by the sensor:
Radiometric error can be introduced by the sensor system when the detectors do not function properly or are improperly calibrated.