
Autonomous Systems Lecture 3 – Simple and Complex Sensors

Introduction

  • Material based on work by:

    • Dr. Marco Wiering

    • Dr. Matias Valdenegro

    • Henry Maathuis

    • Jelle Visser

    • Ben Wolf

Previous Lecture Summary

  • Students should understand basics of:

    • Components of a robot

      • Sensors

      • Effectors and actuators

      • Controllers

    • Degrees of Freedom (DOF)

    • Robot locomotion:

      • Legged locomotion

      • Gaits

      • Wheeled locomotion

    • Trajectory planning

Today's Lecture Goals

  • Outline sensors commonly used in robotics.

  • Understand differences between simple and complex sensors.

  • Understand how sensors are used for perception.

  • Introduce concepts from machine vision.

  • Show examples:

    • Face recognition

    • Emotion recognition

    • Gesture recognition

Sensor Overview

Simple and Complex Sensors
  • Simple Sensors: Provide basic data (1D) without requiring further processing.

  • Complex Sensors: Provide multidimensional data that requires substantial processing before it becomes useful.

Robot Perception

  • Robots perceive:

    • Environment (exteroception)

    • Other agents and actions

    • Themselves (proprioception)

  • Importance of perception:

    • Knowing the state

    • Knowing possible actions

    • Reward estimation

    • Gauging priorities

Sensor Terminology

  • Active vs Passive Sensors:

    • Active Sensors: Emit energy (a signal) into the environment and measure how it interacts or reflects (e.g., sonar or ultrasound).

    • Passive Sensors: Measure the environment without emitting a signal (e.g., light sensors).

  • Simple vs Complex Sensors:

    • Simple: Basic functions (e.g., switches, light sensors).

    • Complex: Advanced functions needing processing (e.g., ultrasound, cameras).

Examples of Simple Sensors

Switches
  • Binary output; detect collisions and motion limits.

  • The simplest type of sensor.
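
As an aside, mechanical switches "bounce" when they close, so the raw binary signal flickers briefly. A minimal debouncing sketch, where read_switch() is an assumed hardware interface:

```python
import time

# Hypothetical sketch: debounce a bump switch by requiring a stable
# reading over several consecutive polls. read_switch() is an assumed
# interface returning the raw (possibly bouncing) binary contact state.
def debounced_state(read_switch, samples=5, poll_interval=0.002):
    state = read_switch()
    stable = 0
    while stable < samples:
        time.sleep(poll_interval)
        reading = read_switch()
        if reading == state:
            stable += 1                  # reading agrees: closer to stable
        else:
            state, stable = reading, 0   # reading changed: restart the count
    return state
```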

Light-sensitive Diode
  • Produces a signal whose voltage/resistance varies with light exposure.

  • Used in daylight sensing and optical media reading.
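
A minimal sketch of reading such a sensor, assuming the common wiring of a light-dependent resistor in a voltage divider with a fixed resistor; all component values are assumptions:

```python
# Illustrative sketch (wiring assumed): a light-dependent resistor (LDR)
# in a voltage divider with a fixed resistor, read through an ADC.
VCC = 5.0          # supply voltage in volts (assumption)
R_FIXED = 10_000   # fixed divider resistor in ohms (assumption)

def ldr_resistance(v_out):
    """Recover the LDR resistance from the divider voltage across R_FIXED."""
    # V_out = VCC * R_FIXED / (R_LDR + R_FIXED)  ->  solve for R_LDR
    return R_FIXED * (VCC / v_out - 1.0)

print(ldr_resistance(2.5))  # at V_out = VCC/2, R_LDR equals R_FIXED
```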

Position Sensors
  • Detect linear/angular position and velocity using potentiometers or optical observation of encoded patterns.

  • Applications include odometry (wheel position/rotation) and joint position determination.

  • How do you get rotational position?

    • By adding rings with different code patterns, each ring contributing one bit (bit encoding), as sketched below.

    • Adding more bits (rings) gives higher angular precision.
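
Absolute encoders usually Gray-code the rings so that only one bit changes between adjacent positions, limiting a misread track to a one-step error. A minimal decoding sketch (the bit values are illustrative):

```python
# Sketch: decode an absolute rotary encoder whose rings form a Gray code.
def gray_to_binary(gray):
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray   # fold each shifted copy in to undo the Gray coding
    return binary

def encoder_angle(gray_reading, n_bits):
    """Convert an n-bit Gray-coded reading to an angle in degrees."""
    position = gray_to_binary(gray_reading)
    return position * 360.0 / (2 ** n_bits)

# 3 rings -> 8 sectors of 45 degrees; more rings -> finer resolution
print(encoder_angle(0b110, 3))  # Gray 110 -> binary 100 -> sector 4 -> 180.0
```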

Servo Mechanism
  • Measures and controls rotational position using:

    • Rotary encoder sensor

    • Gearbox

    • Motor as an actuator

  • Used in robotics to actuate small components (see the control-loop sketch below).
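
A minimal sketch of such a control loop, using a plain proportional controller; read_encoder() and set_motor() are hypothetical hardware interfaces:

```python
import time

# Sketch: a proportional controller drives the motor until the encoder
# reading matches the target angle. Gains and tolerances are assumptions.
def servo_to(target_deg, read_encoder, set_motor, kp=0.05, tol=0.5):
    while True:
        error = target_deg - read_encoder()   # remaining error in degrees
        if abs(error) < tol:
            set_motor(0.0)                    # close enough: stop the motor
            return
        set_motor(kp * error)                 # command proportional to error
        time.sleep(0.01)                      # ~100 Hz control loop
```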

Accelerometers
  • Measure acceleration via Newton's second law (F = ma): the force on a known test mass reveals the acceleration.

  • Used for velocity, orientation, and position estimation (by integration over time).
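
Velocity and position follow from acceleration by integration. A minimal dead-reckoning sketch using Euler integration; in practice, sensor bias and noise make such estimates drift quickly:

```python
# Sketch: integrate accelerometer samples to velocity and position.
def integrate(accels, dt):
    """accels: acceleration samples (m/s^2) at a fixed sample interval dt (s)."""
    velocity, position = 0.0, 0.0
    for a in accels:
        velocity += a * dt            # v <- v + a*dt
        position += velocity * dt     # x <- x + v*dt
    return velocity, position

v, x = integrate([0.5] * 100, dt=0.01)  # 1 s at a constant 0.5 m/s^2
print(v, x)  # ~0.5 m/s and ~0.25 m (x = a*t^2/2)
```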

Gyroscopes
  • Measure angular position (or angular rate) through conservation of angular momentum.

  • Typically combined with accelerometers for better orientation estimates.
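
A common way to combine the two is a complementary filter: the gyro gives a smooth short-term angle but drifts, while the accelerometer gives a drift-free but noisy tilt reference. A minimal pitch-only sketch; the 0.98 blend factor is a typical but assumed value:

```python
import math

# Sketch: fuse one gyro + accelerometer sample into a pitch estimate.
def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """Update a pitch estimate (radians) from one IMU sample."""
    accel_angle = math.atan2(ax, az)      # tilt implied by gravity direction
    gyro_angle = angle + gyro_rate * dt   # integrate the angular velocity
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```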

Complex Sensor Examples

Ultrasonic and Sonar Sensors
  • Use sound waves for echolocation, inspired by bats and dolphins.

  • Require signal processing to extract meaningful information.

  • Can be imprecise: the waves may bounce off other surfaces (specular reflection), generating false readings.
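
Converting a measured echo time to distance is straightforward: the pulse travels out and back, so the one-way distance is half the round trip at the speed of sound. A minimal sketch:

```python
# Sketch: sonar time-of-flight to distance.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def sonar_distance(echo_time_s):
    """One-way distance from the round-trip echo time."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(sonar_distance(0.01))  # a 10 ms round trip -> ~1.7 m
```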

Laser Sensors (LiDAR)
  • Use laser beams and measure their reflections; distance is typically recovered from the phase shift (or time of flight) of the returned light.

  • Generate point clouds by sweeping laser beams across the environment.
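
A sketch of how a 2D scan becomes a point cloud: each range reading at a known beam angle is converted from polar to Cartesian coordinates in the sensor frame. Function and parameter names are illustrative:

```python
import math

# Sketch: convert a 2D laser scan (ranges at known beam angles) into
# (x, y) points in the sensor frame.
def scan_to_points(ranges, angle_min, angle_increment):
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment   # beam direction
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three beams over a 90° arc, all measuring 2 m
print(scan_to_points([2.0, 2.0, 2.0], -math.pi / 4, math.pi / 4))
```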

Vision in Robots

  • Camera Usage: Mimics human vision but with differences:

    • Resolution uniformity: camera sensors have uniform resolution, unlike the foveated human retina.

    • Luminance and colour data are processed differently (see the luminance sketch below).

  • General categories include:

    • Early vision: low-level image representations (edges, contrast, motion).

    • High-level vision: further-processed, task-relevant information (objects, scenes).
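
As an illustration of the luminance/colour split mentioned above, a minimal sketch converting an RGB image to a single luminance channel using the standard ITU-R BT.601 weights:

```python
import numpy as np

# Sketch: weighted sum of the R, G, B channels gives perceived brightness.
def luminance(rgb):
    """rgb: H x W x 3 array in [0, 1]; returns an H x W luminance array."""
    return rgb @ np.array([0.299, 0.587, 0.114])

img = np.random.rand(4, 4, 3)   # dummy image for demonstration
print(luminance(img).shape)     # (4, 4)
```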

Depth Sensing

  • Devices like the Kinect and Intel RealSense produce RGB-D images: colour images with a per-pixel depth channel (often visualized with different colours at different depths).

  • Help estimate distances in robotic applications.
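
A sketch of how a depth pixel is back-projected to a 3D point with the pinhole camera model; the intrinsics (fx, fy, cx, cy) are placeholder values, not those of any specific device:

```python
# Sketch: back-project pixel (u, v) with measured depth to a 3D point
# in the camera frame. Intrinsics are illustrative placeholders.
def pixel_to_point(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

print(pixel_to_point(400, 300, depth=1.5))  # metres in the camera frame
```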

Machine Vision Use Cases

  • Object detection and classification.

  • Face and emotion recognition aiding human-robot interaction.

  • Motion vision: detect differences between consecutive frames to focus attention on what changed (see the sketch below).
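
A minimal sketch of motion vision by frame differencing: pixels whose intensity changed by more than a threshold between consecutive frames are flagged as moving. The threshold value is an assumption:

```python
import numpy as np

# Sketch: frame differencing on grayscale uint8 frames.
def motion_mask(prev_frame, frame, threshold=25):
    """Returns a boolean mask of pixels that changed between the frames."""
    # cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```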

Neural Network-based Vision

  • Modern robot vision typically relies on deep neural networks for:

    • Classification – what is it?

    • Object detection – is there something?

    • Localisation – where is something?

    • Segmentation – which pixels belong to something?
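
For a concrete feel of the "what is it?" question, a minimal classification sketch. It assumes a recent PyTorch/torchvision installation; scene.jpg is a placeholder path:

```python
import torch
from torchvision import models
from PIL import Image

# Sketch: classify one image with a pretrained network.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()              # resize/normalise as trained

img = Image.open("scene.jpg")                  # placeholder image path
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
label = weights.meta["categories"][logits.argmax().item()]
print(label)                                   # the network's best guess
```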

Feature Extraction and Interpretation

  • Vision can be simplified by extracting only task-relevant features from images.

  • A classic combination is colour tracking plus motion detection (see the sketch after this list).

  • Questions for selecting features:

    • Task specificity

    • Environment distinctiveness

    • Sensor availability

    • Computational intensity
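
A minimal sketch of the colour-tracking idea above, assuming OpenCV is available; the HSV bounds and frame.jpg are illustrative placeholders:

```python
import cv2
import numpy as np

# Sketch: threshold the image in HSV space and take the centroid of the
# matching pixels as the tracked object's position.
def track_colour(bgr_frame, lower_hsv, upper_hsv):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)        # binary colour mask
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                      # colour not found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])    # centroid (x, y)

# e.g. a red-ish hue band (values are illustrative)
centre = track_colour(cv2.imread("frame.jpg"),
                      np.array([0, 120, 70]), np.array([10, 255, 255]))
```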

Summary

  • Sensors can be classified as simple/complex and active/passive, depending on their design and application.

  • Vision and sensor fusion require substantial processing to derive meaningful features from raw data.