3 ways to get tree heights
photogrammetry
Lidar
InSAR
2 types of sensors in cameras
CCD
CMOS
Bayer Pattern
checkerboard-like color filter array used in most digital cameras
each pixel receives all 3 colors of light, but the filter lets it record just 1
so each pixel stores a single brightness value (a vector of length 1)
50% green, 25% red, 25% blue
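A minimal sketch of how an RGGB Bayer mosaic keeps a single brightness value per pixel; the 2×2 tile layout and the bayer_mosaic helper are illustrative assumptions (real sensors vary):

```python
import numpy as np

# Sketch only: sample one color per pixel using an RGGB Bayer layout.
# The 2x2 tile (R G / G B) is one common convention; real sensors vary.
def bayer_mosaic(rgb):
    """rgb: (H, W, 3) array -> (H, W) mosaic with one brightness value per pixel."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   -> 25% of pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green -> 50% (two per 2x2 tile)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  -> 25%
    return mosaic
```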
Subtractive Primaries
Yellow, Magenta, Cyan
each is an equal-intensity mix of two of R, G, and B
used in pigments
minus blue filter
appears yellow (a mix of red and green)
blocks blue light so the sensor sees R, G, and NIR
false color composite
eliminates haze from Rayleigh scattering (no blue band)
no diffuse illumination (deeper, darker shadows)
what paint color degrades the fastest?
red; it absorbs green and blue light, which carry more energy
aperture
how big the hole is
smaller aperture → increased depth of field
(squinting to see farther)
shutter speed
how long light is let in
shorter exposures stop action / eliminate motion blur
Additive Primaries
RGB
light
Yellow
Red + Green
Larger F-stop
smaller aperture
focal length (f)
distance from the lens at which parallel light rays come to a point
field of view
variable → zoom camera
fixed → can’t change zoom
F-stop (F)
focal length / aperture diameter (F = f/d)
controls depth of field through changing aperture
object distance (o)
distance from the camera to what you’re taking a picture of
image distance (i)
distance between the sensor and the lens
adjusting i is how you focus
Lens Maker’s Equation
(1/f) = (1/o) + (1/i)
far away object → (1/o) goes to zero
→ image distance = focal length (no need to focus)
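A quick sketch solving the equation for image distance; the image_distance helper and sample numbers are hypothetical:

```python
# Sketch: solve 1/f = 1/o + 1/i for the image distance i.
def image_distance(f, o):
    """f: focal length, o: object distance (same units). Returns i."""
    return 1.0 / (1.0 / f - 1.0 / o)

print(image_distance(0.05, 1.0))  # nearby subject: i (~0.053) > f, must refocus
print(image_distance(0.05, 1e9))  # far away: 1/o -> 0, so i -> f (~0.050)
```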
ISO
sensitivity to light
larger → greater sensitivity → more noise
lower → more light needed
SNR
signal to noise ratio
increase signal by getting more photons in
(how wide the aperture is and how long the shutter is open)
faster shutter speed
to capture something moving fast
to capture from a fast-moving plane
Things you can change in a camera
ISO - detector sensitivity
F-stop - depth of field
i - focus
f - field of view (zoom)
t (shutter speed) - stop action
Exposure equation
E = (scene brightness × shutter speed) / (4 × F-stop²)
Increased focal length
exposure decreases
e.g., a long wildlife-camera lens
you lose a lot of exposure because the focal-length term is squared
which is why you would need a tripod
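A small sketch of the exposure equation above; the exposure helper and sample values are made up:

```python
# Sketch of E = (scene brightness * shutter speed) / (4 * F-stop^2).
def exposure(scene_brightness, shutter_time, f_stop):
    return scene_brightness * shutter_time / (4.0 * f_stop ** 2)

# Doubling focal length at a fixed aperture diameter doubles the F-stop
# (F = f/d), cutting exposure to 1/4 -- hence the tripod:
print(exposure(100.0, 1 / 250, f_stop=4.0))  # 0.00625
print(exposure(100.0, 1 / 250, f_stop=8.0))  # 0.0015625 (1/4 as much light)
```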
parallax
apparent displacement in the location of an object caused by a change in view position
finger moving in front of face
closer object → more parallax
need to know flight line
changes with elevation
parallax in human eye
we stop perceiving it beyond about 50 feet
control points
provide a reference system by tying the image to known ground locations
endlap
overlap between adjacent photos along flight line
stereo coverage (need at least 50% overlap)
sidelap
overlap between photos in adjacent flight lines
ensures complete coverage across flight lines
Lidar
Light Detection and Ranging
active
calculates distances using measured times
uses NIR or green for laser pulses
gives x,y,z locations
pulses can pass through gaps in the canopy to reach the ground
lidar footprint
area on ground covered by an individual pulse
lower flying height = smaller footprint = finer resolution
waveform lidar
records the intensity of the entire return waveform
can detect ground with high accuracy
amount of reflected energy
more data
discrete lidar
point clouds
values at peak of waveform
2-5 returns
saves time and space
scanning
push broom
profiling
a single beam aimed forward along the flight line
most satellites use this
pulse density/frequency
number of pulses per unit on the ground
fly lower and slower, or make more passes, to increase density
lower density → less likely to hit the tops of trees
density can be lowered and still give a similar picture of canopy distribution
lidar intensity
how much NIR is reflected back to the sensor
measured from the number of returned photons
can be used to visualize the point cloud
mosaic
flight lines stitched together
DSM (digital surface model)
like putting a blanket over an area
elevation of the top surface
like a DEM but includes both manmade and natural objects
air base (B)
distance the plane flew between successive photos
image cross correlation
matching the same point across overlapping photos
easier in well-defined areas (road or stream intersections)
better matches lead to better height estimates
H’
flight height above ground
H
flying height above mean sea level
h
elevation of ground surface
scale (S)
changes with elevation
(distance on photo or map / distance on ground)
scale equations
S = d/D = f/(H − h) = P/B
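A short sketch of the flying-height form of the scale equation; the photo_scale helper and the numbers are illustrative:

```python
# Sketch: photo scale from S = f / (H - h).
def photo_scale(f_m, H_msl_m, h_ground_m):
    """f: focal length (m), H: flying height above MSL (m), h: ground elevation (m)."""
    return f_m / (H_msl_m - h_ground_m)

# Same flight, two ground elevations -> two scales (scale changes with terrain):
print(photo_scale(0.152, 3000.0, 300.0))  # ~1:17,800
print(photo_scale(0.152, 3000.0, 900.0))  # ~1:13,800 (higher ground -> larger scale)
```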
structure from motion
creating a 3D model by matching many overlapping pictures
requires more overlap than traditional photogrammetry
beam divergence
measured in milliradians (mrad)
0.7 mrad at 1000 m ≈ 0.7 m footprint diameter
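A one-line sketch of the small-angle footprint estimate (diameter ≈ divergence × range, ignoring the beam’s initial width); the helper is hypothetical:

```python
# Sketch: footprint diameter from beam divergence (small-angle approximation).
def footprint_diameter_m(divergence_mrad, range_m):
    return (divergence_mrad / 1000.0) * range_m  # mrad -> rad, then * range

print(footprint_diameter_m(0.7, 1000.0))  # 0.7 m, matching the example above
```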
3 ways to collect lidar data
ground
airplane
space
lidar return
energy returned back to sensor
distance formula
d = (travel time × speed of light) / 2
ground elevation
altitude - distance
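A sketch of the two formulas above (range from round-trip time, then ground elevation); the helper names and the 6.67 µs round trip are made up:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s (approximate)

# Sketch: d = (travel time * speed of light) / 2, then elevation = altitude - d.
def lidar_range_m(travel_time_s):
    return travel_time_s * SPEED_OF_LIGHT / 2.0  # halved: pulse goes out and back

def ground_elevation_m(sensor_altitude_m, travel_time_s):
    return sensor_altitude_m - lidar_range_m(travel_time_s)

print(lidar_range_m(6.67e-6))               # ~1000.5 m range
print(ground_elevation_m(1200.0, 6.67e-6))  # ~199.5 m above the datum
```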
orientation of plane
measured by an inertial measurement unit (IMU)
canopy height model
DSM − DEM
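A minimal sketch of the CHM as a per-cell raster difference, assuming the DSM and DEM are co-registered grids in the same units; the tiny arrays are made up:

```python
import numpy as np

# Sketch: canopy height model = DSM - DEM, cell by cell.
dsm = np.array([[310.0, 312.5], [305.0, 300.0]])  # top-of-canopy elevations (m)
dem = np.array([[300.0, 301.0], [302.0, 300.0]])  # bare-earth elevations (m)

chm = np.clip(dsm - dem, 0.0, None)  # clip small negative artifacts to zero
print(chm)  # [[10.  11.5] [ 3.   0. ]] -> canopy heights in meters
```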
sensor dead time
your sensor has to reset after a pulse
cannot make reading during this time
may prevent you from seeing the ground
leads to underestimations of tree height
segmentation
can combine data types to get structure and function
can determine chlorophyll content, etc.
fires
have peak spectral radiance at around 4 micrometers
lots in Canada, Alaska, and Siberia
fire band
narrow
placed onto sensor of satellite
at around 4 micrometers
can see fires within minutes
differential normalized burn ratio
used to detect fire severity
vegetation and burned areas differ most in the NIR and SWIR2 bands
SWIR2 is harder to detect because there is less solar energy at those wavelengths
high NBR means healthy vegetation or regrowth
low NBR means a severe burn
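A minimal sketch, assuming the standard definitions NBR = (NIR − SWIR2) / (NIR + SWIR2) and dNBR = prefire NBR − postfire NBR; the reflectance values are hypothetical:

```python
# Sketch: NBR before and after a fire, then dNBR for severity.
def nbr(nir, swir2):
    return (nir - swir2) / (nir + swir2)

pre = nbr(nir=0.45, swir2=0.15)   # healthy vegetation -> high NBR (0.5)
post = nbr(nir=0.20, swir2=0.30)  # burned surface -> low NBR (-0.2)
dnbr = pre - post                 # 0.7: large positive dNBR -> severe burn
print(pre, post, dnbr)
```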
fuel bed
homogeneous unit that will have similar burn characteristics
remote sensing options
government moderate to low resolution satellites
commercial high spatial resolution
crewed aircraft
uncrewed aircraft (drone)
Simple ratio
SR = NIR / R
a way to distinguish leafy vegetation in an area
larger values indicate healthier vegetation
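A tiny sketch of SR computed per pixel; the band arrays are made up:

```python
import numpy as np

# Sketch: simple ratio SR = NIR / R, per pixel.
nir = np.array([0.40, 0.35, 0.10])
red = np.array([0.05, 0.08, 0.09])

print(nir / red)  # [8.    4.375 1.11...] -> larger = healthier vegetation
```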