Autonomous Systems Summary

1. Braitenberg Vehicles (Chapters 1, 2, 3 + Slides Lecture 1)

  • Definition of Robot: An autonomous system existing in the physical world, able to sense its environment and act to achieve goals.

  • Autonomy vs Teleoperation: Autonomous robots act based on their own decisions, as opposed to teleoperated robots controlled externally.

  • W. Grey Walter's Tortoise: The first modern robot demonstrating biomimetic design, using reactive control and emergent behavior.

    • Biomimetic: Imitates biological creatures/behaviors.

    • Reactive Control: Couples sensor input directly to actions rather than following a stored plan; complex overall behavior emerges from simple rules.

    • Photophilic vs Photophobic: Light-loving versus light-fearing.
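The photophilic/photophobic distinction maps onto Braitenberg's classic vehicles. A minimal sketch, assuming made-up sensor values and wiring names (not from the book): crossed sensor-to-motor wiring steers toward the light, uncrossed wiring steers away.

```python
# Toy Braitenberg vehicle step: wheel speeds driven directly by light sensors.
# Sensor readings in [0, 1] are hypothetical illustration values.

def braitenberg_step(left_light, right_light, crossed):
    """Return (left_wheel, right_wheel) speeds for one control step.

    crossed=True  -> contralateral wiring (photophilic: turns toward light)
    crossed=False -> ipsilateral wiring (photophobic: turns away from light)
    """
    if crossed:
        # The right sensor drives the left wheel, so the vehicle turns toward the light.
        return (right_light, left_light)
    # Each sensor drives the wheel on its own side: the vehicle turns away.
    return (left_light, right_light)

# Light source on the right (right sensor reads higher):
print(braitenberg_step(0.2, 0.8, crossed=True))   # (0.8, 0.2) -> turns toward it
print(braitenberg_step(0.2, 0.8, crossed=False))  # (0.2, 0.8) -> turns away
```

Note that no behavior is stored anywhere: turning toward or away from light is purely emergent from the wiring, which is the point of the reactive-control examples above.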

Main Components of Robots:

  • Physical Body: Embodiment necessary to exist in the world.

  • Sensors: Enable perception of the environment and robot state.

    • Types of sensor data: discrete (e.g., a binary bump switch) vs. continuous (e.g., a distance reading); the underlying state may be fully observable or only partially observable.

  • Effectors & Actuators:

    • Effectors: Devices that have an effect on the robot's environment (e.g., wheels, legs).

    • Actuators: Mechanisms that drive the effectors (e.g., motors, muscles).

  • Controller: Provides autonomy, processing sensor input, deciding actions, and controlling actuators.
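The four components above can be tied together as a minimal control loop. A sketch, assuming an invented distance threshold and command names purely for illustration:

```python
# Minimal sense -> decide -> act loop: the controller maps each sensor
# reading to an actuator command. Threshold and commands are hypothetical.

def controller(distance_reading):
    """Decide an action from one sensor reading: stop near obstacles."""
    return "stop" if distance_reading < 0.5 else "forward"

def control_loop(readings):
    """Run the loop once per sensor reading, returning the chosen actions."""
    return [controller(r) for r in readings]

print(control_loop([2.0, 1.0, 0.3]))  # ['forward', 'forward', 'stop']
```

The controller is what provides autonomy here: the body, sensors, and actuators are inert without the sense-decide-act cycle closing the loop.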

2. Machine Learning (Chapter 21 + Slides Lecture 2)

  • Definition of Learning: Acquiring new knowledge or skills to improve performance.

Types of Learning in Robots:

  • Supervised Learning: Learning a mapping from labeled input-output pairs; requires a teacher to supply the correct outputs.

  • Unsupervised Learning: Finding patterns in data without output labels, e.g., by clustering similar inputs.

  • Reinforcement Learning: Learning through interaction with the environment based on trial and error.

    • Balances exploration vs exploitation for long-term reward maximization.
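The exploration/exploitation balance can be sketched with epsilon-greedy action selection (my choice of method for illustration, not necessarily the lecture's) on a toy two-action problem with invented rewards:

```python
import random

random.seed(0)  # fixed seed so the illustrative run is repeatable

def epsilon_greedy(q_values, epsilon):
    """Explore (random action) with probability epsilon, else exploit the best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # exploration
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploitation

def update(q_values, counts, action, reward):
    """Incremental average: pull the value estimate toward each observed reward."""
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

q, n = [0.0, 0.0], [0, 0]
for _ in range(1000):
    a = epsilon_greedy(q, epsilon=0.1)
    r = random.gauss(1.0 if a == 1 else 0.0, 0.1)  # action 1 pays more on average
    update(q, n, a, r)

print(q[1] > q[0])  # True: trial and error has found the better action
```

Occasional exploration (here 10% of steps) keeps both actions' estimates honest, while exploitation of the current best estimate maximizes reward in the long run.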

Feedback Mechanisms in Learning:

  • Positive Feedback: Rewards actions leading to desired outcomes.

  • Negative Feedback: Penalizes undesirable actions or states.

3. Forgetting and Lifelong Learning

  • Forgetting: Useful for discarding outdated information and keeping memory use and processing speed manageable.

  • Lifelong Learning: Continuous improvement and adaptation to changes in the environment.

  • Learning from Demonstration: Helps robots learn tasks via imitation of good examples.

4. Genetic Algorithms & Evolutionary Computation (Lecture 3 + Online Material)

  • Genetic Algorithms (GA): Learning approach based on simulated evolution, mutating and recombining solutions.

  • Steps in GAs:

    1. Evaluate the initial population of solutions based on performance (fitness).

    2. Select the fitter solutions and recombine (crossover) and mutate them to generate new solutions.

    3. Repeat until satisfactory solutions are found.

  • Genetic Programming: Evolves computer programs rather than binary strings, using tree structures to represent functions.
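The GA steps above can be sketched on a toy problem. This minimal version evolves bit strings to maximise the number of 1-bits ("OneMax"); the task, population size, and rates are all illustrative, not from the lecture:

```python
import random

random.seed(42)  # fixed seed so the illustrative run is repeatable

def fitness(bits):
    """Evaluate a solution: more 1-bits is better (optimum = string length)."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 1: evaluate and select -- keep the fitter half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Step 2: recombine (one-point crossover) and mutate to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < mutation_rate) for bit in child])
        pop = parents + children
    # Step 3: stop after a fixed budget and return the best solution found.
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 16 after a few generations
```

This also illustrates the cons listed below: nothing guarantees the loop reaches the optimum in finite time, and every generation costs a full round of fitness evaluations.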

Pros and Cons of GA:

  • Pros:

    • Intuitive and applicable to many tasks.

    • Effective for multi-objective optimization.

  • Cons:

    • No convergence guarantees in finite time.

    • High computational expense for evaluations.

5. Cellular Automata and ROS (Lecture 4 + Online Material)

  • Artificial Life vs Artificial Intelligence:

    • AL: Focused on simulating real-life organisms and phenomena through simple rules and interactions.

    • AI: Aims to create general intelligence, often employing top-down approaches.

  • Evolutionary Computing: Incorporates elements from both AL (GA, CA) and AI.
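The AL idea of complex global behaviour arising from simple local rules is exactly what cellular automata demonstrate. A sketch of an elementary 1-D CA (Rule 110 is my example choice; boundary handling is a simplifying assumption):

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours, looked up in an 8-bit rule number.

def step(cells, rule=110):
    """Apply an elementary CA rule to a row of cells (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    return [
        # The 3-cell neighbourhood encodes a value 0-7; the rule's bit at
        # that position gives the cell's next state.
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 15 + [1]  # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Despite the rule fitting in a single byte, the pattern that unfolds is intricate, which is the bottom-up AL point being contrasted with top-down AI above.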

6. Robot Motion and Locomotion (Chapters 4, 5 + Lecture 5)

Motors:

  • DC Motors vs Servo Motors: DC motors convert electrical energy into continuous rotation; servo motors add position feedback for precise angular control.

  • Gears: Modify motor output speed and torque through gear arrangement.
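The speed/torque trade-off from gearing reduces to simple arithmetic: a gear ratio divides output speed and multiplies output torque by the same factor. A worked sketch with illustrative numbers:

```python
# Gear reduction: ratio = output teeth / input teeth.
# Speed is divided by the ratio, torque is multiplied by it.

def gear_output(motor_speed_rpm, motor_torque_nm, teeth_in, teeth_out):
    ratio = teeth_out / teeth_in  # e.g. 30/10 = 3:1 reduction
    return motor_speed_rpm / ratio, motor_torque_nm * ratio

speed, torque = gear_output(3000, 0.5, teeth_in=10, teeth_out=30)
print(speed, torque)  # 1000.0 1.5 -- a third of the speed, three times the torque
```

This is why small, fast, low-torque motors are usually geared down for robot drives: the load needs torque more than it needs raw shaft speed.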

Degrees of Freedom (DOF):

  • The minimum number of independent coordinates needed to specify the robot's position and motion.

    • Types of DOF: Translational (X, Y, Z), Rotational (Roll, Pitch, Yaw).
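The six DOF listed above can be captured as a small pose type (the class and field names are my own illustration): three translational coordinates plus three rotation angles fully specify a rigid body's position and orientation.

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Translational DOF
    x: float
    y: float
    z: float
    # Rotational DOF (angles in radians)
    roll: float
    pitch: float
    yaw: float

hover = Pose6DOF(x=1.0, y=2.0, z=0.5, roll=0.0, pitch=0.0, yaw=1.57)
print(hover.yaw)  # 1.57
```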

Locomotion:

  • Legged locomotion requires balance, achieved through static or dynamic stability.

  • Gait Types: Statically stable gaits keep the center of gravity over the support polygon at all times (safe but energy inefficient); dynamically stable gaits allow controlled falling between steps (energy efficient).

7. Robot Control and Architectures (Chapters 11, 13, 15, 16 + Lecture 6)

  • Control Architectures: Principles governing robot control systems, including reactive and deliberative control.

  • Deliberative Control: Involves SENSE -> PLAN -> ACT, focusing on decision-making and optimization but can be slow.

  • Reactive Control: Fast responses with direct sensor-actuator mapping, beneficial in dynamic environments.

Hybrid Control Systems:

  • Combine reactive and deliberative control, allowing for adaptable responses.

    • Behaviour-Based Control (BBC): Distributes control across concurrent behaviour modules rather than a central representation, allowing flexible responses.
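Behaviour-based arbitration can be sketched as prioritised behaviour modules, each proposing an action or abstaining. The behaviours and fixed-priority scheme here are a toy illustration, not the book's architecture:

```python
# Each behaviour inspects the state and either proposes an action or
# returns None. A fixed priority ordering arbitrates: the reactive
# safety behaviour can override the deliberative goal-seeking one.

def avoid_obstacle(state):
    if state["obstacle_distance"] < 0.5:
        return "turn_away"   # reactive: fires only when danger is close
    return None              # no opinion -> defer to lower-priority behaviours

def seek_goal(state):
    return "move_to_goal"    # the deliberative layer's standing plan

BEHAVIOURS = [avoid_obstacle, seek_goal]  # earlier entries have priority

def arbitrate(state):
    for behaviour in BEHAVIOURS:
        action = behaviour(state)
        if action is not None:
            return action

print(arbitrate({"obstacle_distance": 2.0}))  # move_to_goal
print(arbitrate({"obstacle_distance": 0.2}))  # turn_away
```

No central world model is consulted: the overall response emerges from which decentralized behaviours choose to fire, which is the BBC idea described above.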

8. Language Grounding and Learning (Lecture 7 + Online Material)

  • Symbol Grounding Problem: Challenge of assigning meaning to symbols within robotic systems.

  • Types of Symbol Grounding:

    • Physical: Grounding through interaction with real-world objects.

    • Social: Collective negotiation for shared understanding among agents.

9. Sensors (Chapters 7, 8, 9 + Lecture 8)

  • Sensor Types:

    • Proprioceptors: Measure internal states.

    • Exteroceptors: Measure external states.

  • Active vs Passive Sensors:

    • Active sensors emit energy into the environment and measure the response; passive sensors only measure energy already present in the environment.

  • Applications:

    • Complex sensors enable advanced tasks in robotics, such as navigation and obstacle detection through data interpretation.
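As a concrete instance of an active sensor for obstacle detection, an ultrasonic rangefinder (my example choice, not necessarily the chapters') emits a pulse and times the echo; the obstacle distance is half the round-trip path:

```python
# Active sensing: emit a sound pulse, time the echo, convert to distance.
# The echo time used below is a hypothetical illustration value.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def ultrasonic_distance(echo_time_s):
    """Distance to an obstacle from a round-trip echo time, in metres."""
    return SPEED_OF_SOUND * echo_time_s / 2  # halved: the pulse travels out and back

print(round(ultrasonic_distance(0.01), 3))  # 1.715 -- a 10 ms echo means ~1.7 m
```

A camera, by contrast, is passive: it only measures light already in the scene, and turning its pixel data into obstacle distances is exactly the kind of data interpretation the bullet above refers to.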