Material based on work by:
Dr. Marco Wiering
Dr. Matias Valdenegro
Henry Maathuis
Jelle Visser
Ben Wolf
Students should understand basics of:
Components of a robot
Sensors
Effectors and actuators
Controllers
Degrees of Freedom (DOF)
Robot locomotion:
Legged locomotion
Gaits
Wheeled locomotion
Trajectory planning
Outline sensors commonly used in robotics.
Understand differences between simple and complex sensors.
Understand how sensors are used for perception.
Introduce concepts from machine vision.
Show examples:
Face recognition
Emotion recognition
Gesture recognition
Simple Sensors: Provide basic (typically one-dimensional) data without requiring further processing.
Complex Sensors: Provide multidimensional data that requires substantial processing before it is useful.
Robots perceive:
Environment (exteroception)
Other agents and actions
Themselves (proprioception)
Importance of perception:
Knowing the state
Knowing possible actions
Reward estimation
Gauging priorities
Active vs Passive Sensors:
Active Sensors: Emit energy into the environment and measure how it interacts with it (e.g., sonar or ultrasound).
Passive Sensors: Measure without direct interaction (e.g., light sensors).
Simple vs Complex Sensors:
Simple: Basic functions (e.g., switches, light sensors).
Complex: Advanced functions needing processing (e.g., ultrasound, cameras).
Binary output; detects collisions and motion limits.
A simple sensor.
Produces a signal from light exposure via a change in voltage/resistance.
Used in daylight sensing and optical media reading.
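Such a sensor can be read as a voltage divider: as light lowers the photoresistor's resistance, the output voltage rises. A minimal sketch; the 10 kΩ fixed resistor and 5 V supply are illustrative assumptions, not values from the slides:

```python
# Reading a photoresistor (LDR) through a voltage divider: a minimal sketch.
# Component values are illustrative assumptions.
VCC = 5.0        # supply voltage (V)
R_FIXED = 10e3   # fixed resistor (ohms)

def divider_voltage(r_photo_ohms: float) -> float:
    """Output voltage rises as light lowers the photoresistor's resistance."""
    return VCC * R_FIXED / (R_FIXED + r_photo_ohms)

print(divider_voltage(1e3))    # bright: ~4.5 V
print(divider_voltage(100e3))  # dark:   ~0.45 V
```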
Detect linear/angular position and velocity using potentiometers or by observing patterns on an encoder disc.
Applications include odometry (wheel position/rotation) and joint position determination.
How do you get rotation from position?
By adding rings with different codes → bit encoding.
Adding more bits → higher precision.
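A quick worked example of the bit-precision idea, plus the odometry use mentioned above; the wheel radius and tick counts are hypothetical:

```python
import math

# Angular resolution of an absolute rotary encoder: each extra code ring
# adds one bit, doubling the number of distinguishable positions.
def encoder_resolution_deg(n_bits: int) -> float:
    return 360.0 / (2 ** n_bits)

for bits in (4, 8, 12):
    print(bits, "bits ->", encoder_resolution_deg(bits), "deg per step")

# Odometry: distance travelled from wheel-encoder ticks
# (wheel radius and tick count are illustrative assumptions).
WHEEL_RADIUS_M = 0.05
TICKS_PER_REV = 2 ** 12                  # a 12-bit encoder
ticks = 10_000
distance = ticks / TICKS_PER_REV * 2 * math.pi * WHEEL_RADIUS_M
print(round(distance, 3), "m travelled")
```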
Measures and controls rotational position using:
Rotary encoder sensor
Gearbox
Motor as an actuator
Applications in robotics include actuating small components.
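How these parts combine can be sketched as a proportional control loop: the encoder reports the position, and the motor is driven in proportion to the remaining error. A minimal simulated sketch; the gain, time step, and motor model are illustrative assumptions, not real hardware:

```python
KP = 2.0      # proportional gain (tuning depends on motor and gearbox)
DT = 0.01     # control period (s), i.e. a 100 Hz loop

position = 0.0  # simulated encoder reading, in degrees

def servo_step(target_deg: float) -> float:
    """One control step: drive the motor proportionally to the position error."""
    global position
    error = target_deg - position   # encoder feedback
    velocity = KP * error           # simulated motor speed command (deg/s)
    position += velocity * DT       # the gearbox output moves accordingly
    return position

for _ in range(500):
    servo_step(90.0)                # converge toward a 90-degree target
print(round(position, 2))           # ~90.0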
Measures acceleration using Newton's law (F = ma).
Used for velocity measurements, orientation, and position estimation.
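For instance, velocity can be estimated by integrating acceleration over time, though bias and noise make the estimate drift. A minimal sketch with made-up one-axis readings:

```python
DT = 0.01  # sample period (100 Hz)

# Hypothetical readings: accelerate at 0.5 m/s^2 for 1 s, then coast.
accel_samples = [0.5] * 100 + [0.0] * 100

velocity = 0.0
for a in accel_samples:
    velocity += a * DT    # v <- v + a * dt
print(velocity)           # ~0.5 m/s; real data would drift due to bias
```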
Measures angular velocity, and by integration angular position, through conservation of angular momentum.
Typically combined with accelerometers for better orientation estimates.
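One common fusion scheme is the complementary filter: integrate the gyro for fast motion, and pull the estimate toward the accelerometer's gravity-based tilt to cancel drift. A minimal sketch with made-up readings; the gain and sample rate are illustrative assumptions:

```python
import math

ALPHA = 0.98   # trust in the gyro integration vs. the accelerometer
DT = 0.01      # sample period (100 Hz)

roll = 0.0
for _ in range(100):
    gyro_x = 0.1                     # angular velocity (rad/s), assumed reading
    ay, az = 0.0, 9.81               # accelerometer (m/s^2), assumed readings
    accel_roll = math.atan2(ay, az)  # gravity direction gives absolute tilt
    # Gyro tracks fast motion; the accelerometer corrects long-term drift.
    roll = ALPHA * (roll + gyro_x * DT) + (1 - ALPHA) * accel_roll
print(roll)
```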
Use sound frequencies for echolocation, inspired by bats and dolphins.
They require processing for meaningful information.
Can be imprecise, as the waves bounce off other surfaces and generate false readings.
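The distance itself follows from the echo's time of flight: the pulse travels to the obstacle and back, so the one-way distance is half the round trip. A minimal sketch, assuming the speed of sound in air at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def sonar_distance(echo_time_s: float) -> float:
    """One-way distance: the pulse travels there and back."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(sonar_distance(0.01))  # ~1.72 m for a 10 ms round trip
```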
Use light beams and measure their reflections; the phase shift between the emitted and returned light encodes distance.
Generate point clouds from many unidirectional laser-beam measurements.
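A planar scan of (angle, range) pairs converts directly to a 2D point cloud. A minimal NumPy sketch with placeholder readings:

```python
import numpy as np

angles_rad = np.linspace(-np.pi, np.pi, 360)  # one beam per degree
ranges_m = np.full(360, 2.0)                  # placeholder: 2 m everywhere

# Each beam measures one direction; together the hits form a point cloud.
xs = ranges_m * np.cos(angles_rad)
ys = ranges_m * np.sin(angles_rad)
points = np.stack([xs, ys], axis=1)           # shape (360, 2)
print(points.shape)
```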
Camera Usage: Mimics human vision but with differences:
Resolution: uniform across a camera image, but non-uniform in the human retina (foveal vision).
Processing: luminance and colour data are handled differently.
General categories include:
Early vision: Basic image representation.
High-level vision: Further processed data.
Devices like Kinect and Intel RealSense produce RGBD images (a depth value per pixel, often visualised as different colours depending on depth).
Help estimate distances in robotic applications.
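With known camera intrinsics, each depth pixel back-projects to a 3D point via the pinhole model. A minimal sketch; the intrinsics below are illustrative Kinect-like values, not from the slides:

```python
def pixel_to_point(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Return the (X, Y, Z) point, in metres, seen at pixel (u, v)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

print(pixel_to_point(400, 300, 1.5))  # a point 1.5 m in front of the camera
```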
Object detection and classification.
Face and emotion recognition aiding human-robot interaction.
Motion vision: detect differences between consecutive frames to focus attention on moving regions.
Nowadays these tasks are handled by deep neural networks (see the classification sketch after this list):
Classification – what is it?
Object detection – is there something?
Localisation – where is something?
Segmentation – which pixels belong to what?
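As a taste of the deep-learning approach, a pretrained classifier can be queried in a few lines. A minimal sketch, assuming PyTorch/torchvision are installed; the network choice and the random stand-in input are illustrative:

```python
import torch
from torchvision import models

# Pretrained ResNet-18 as an example image classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

img = torch.rand(1, 3, 224, 224)     # stand-in for a preprocessed camera frame
with torch.no_grad():
    logits = model(img)
print(logits.argmax(dim=1).item())   # index of the predicted ImageNet class
```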
Simplifying vision means filtering the image down to task-relevant information.
For example, combining colour tracking with motion detection (sketch below).
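A minimal NumPy sketch of that combination; the frames, colour threshold, and change threshold are all illustrative assumptions:

```python
import numpy as np

h, w = 120, 160
prev = np.zeros((h, w, 3), dtype=np.uint8)   # previous RGB frame
frame = np.zeros((h, w, 3), dtype=np.uint8)  # current RGB frame
frame[40:80, 60:100] = (200, 30, 30)         # a "red object" appears

# Colour mask: pixels whose red channel dominates.
color_mask = (frame[..., 0] > 150) & (frame[..., 1] < 100)

# Motion mask: pixels that changed noticeably since the last frame.
diff = np.abs(frame.astype(int) - prev.astype(int)).sum(axis=-1)
motion_mask = diff > 50

# Only moving pixels of the target colour survive the filter.
target = color_mask & motion_mask
print(target.sum(), "candidate pixels")
```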
Questions for selecting features:
Task specificity
Environment distinctiveness
Sensor availability
Computational intensity
Sensors can be classified as simple/complex and active/passive, depending on the application.
Vision and sensor fusion require complex processing to derive meaningful features from raw data.