Autonomous Systems
Systems that can operate independently and make decisions without human intervention.
Robot
An autonomous system that interacts with the physical world, can sense its environment, and can act to achieve goals.
Biomimetic
Imitating biological creatures or behaviors.
Photophilic
Having an affinity for light.
Photophobic
Avoiding or being afraid of light.
Sensor
A device that detects and provides information about the environment and the robot.
Actuator
Mechanism that executes actions or movements in response to commands.
Reactive Control
Control mechanism that responds to sensory information without deliberation.
Supervised Learning
A type of machine learning where an external supervisor provides input-output pairs for the system to learn.
Unsupervised Learning
A type of machine learning that identifies patterns from input data without external guidance.
Reinforcement Learning
Learning through trial and error, where an agent interacts with the environment to maximize rewards.
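The trial-and-error loop above can be sketched as a single tabular Q-learning update, a common reinforcement-learning method (the states, actions, and values here are illustrative, not from the source):

```python
# Minimal sketch of a tabular Q-learning update: the agent nudges its value
# estimate for (state, action) toward the observed reward plus the discounted
# value of the best next action.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One temporal-difference update of the Q-table (a dict of dicts)."""
    best_next = max(q[next_state].values())
    target = reward + gamma * best_next
    q[state][action] += alpha * (target - q[state][action])
    return q

# Toy example: two states, two actions, all values start at zero.
q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}
q = q_update(q, "s0", "right", reward=1.0, next_state="s1")
# q["s0"]["right"] moves from 0.0 toward the target 1.0, landing at 0.5
```

Repeating such updates while the agent explores is what drives the value estimates, and hence the behavior, toward reward-maximizing actions.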
Genetic Algorithms
Search optimization algorithms inspired by natural evolution that mutate and combine solutions.
Cellular Automata
Discrete computational models that evolve over time according to a set of rules.
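A one-dimensional cellular automaton makes the "discrete states evolving by local rules" idea concrete; this sketch implements the classic elementary Rule 110 (the initial row is arbitrary):

```python
# Each cell's next state depends only on itself and its two neighbours;
# the rule number's bits encode the output for each 3-bit neighbourhood.

def step(cells, rule=110):
    """Apply one update of an elementary CA with wrap-around boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode (left, self, right) as a 3-bit index into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

row = [0, 0, 0, 1, 0, 0, 0]
row = step(row)   # the single live cell spreads to its left neighbour
```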
Emergent Behavior
Complex behaviors that arise from simple interactions between agents.
Degrees of Freedom (DOF)
The minimum number of coordinates needed to fully describe the motion of a mechanical system.
Trajectory Planning
The process of determining a path, along with its timing, for a robot to follow in order to reach a goal.
Deliberative Control
Planning control where the robot considers multiple options before acting.
Reactive Control
Control where responses are generated rapidly based on current sensor information.
Hybrid Control
A system that integrates both reactive and deliberative control architectures.
Behavior-based Control (BBC)
A control approach that utilizes a collection of modular behaviors for decision-making.
Symbol Grounding Problem
The challenge of how a system can assign meaning to the symbols it processes.
Physical Symbol System Hypothesis
Theory that proposes that a physical symbol system has the necessary and sufficient means for general intelligent action.
Proprioceptors
Sensors that perceive internal states of a system.
Exteroceptors
Sensors that perceive external states of a system.
Active Sensors
Sensors that emit their own signals to measure the environment's response.
Passive Sensors
Sensors that measure the physical properties of the environment without emitting signals.
Noise
Unwanted disturbances that affect sensor accuracy and data quality.
Neural Network Learning
A process where robots learn connection weights between nodes in a neural network.
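Learning connection weights can be sketched with the simplest possible network, a single perceptron trained on labelled examples (here the logical AND function; the learning rate and data are illustrative):

```python
# Supervised learning of connection weights: each error nudges the weights
# in proportion to the inputs that produced it.

def train_perceptron(samples, epochs=10, lr=0.1):
    """Return weights [w1, w2, bias] fit to (x1, x2) -> label samples."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
```

Larger networks replace this single threshold unit with layers of units and a gradient-based update, but the principle is the same: errors adjust weights.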
Clustering
Grouping similar data points together in unsupervised learning.
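A minimal sketch of clustering, assuming one-dimensional data and two clusters (a stripped-down k-means): points are assigned to the nearest centre, and each centre moves to the mean of its points.

```python
# Unsupervised grouping of 1-D points into two clusters by alternating
# assignment and centre-update steps.

def kmeans_1d(points, c0, c1, iters=10):
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        if a:
            c0 = sum(a) / len(a)   # move each centre to the mean of
        if b:
            c1 = sum(b) / len(b)   # the points currently assigned to it
    return c0, c1, a, b

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
c0, c1, low, high = kmeans_1d(data, 0.0, 10.0)
```

No labels are provided; the grouping emerges purely from similarity between the data points.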
Evolutionary Computation
A computation model that simulates biological evolution processes like natural selection.
Baldwin Effect
Phenomenon where learned behaviors impact evolutionary fitness over generations.
Lamarckian Learning
The idea that traits an organism acquires during its lifetime can be passed on to its offspring.
Gait
The manner or pattern of movement of the limbs of an animal or robot.
Static Stability
The ability of a robot to remain upright while at rest, without active balancing.
Dynamic Stability
Active balancing required to maintain stability during motion.
Command Arbitration
The process of selecting one action among multiple possible actions.
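One common arbitration scheme is priority-based selection, where the highest-priority active behavior wins; this sketch uses illustrative behavior names and priorities:

```python
# Several behaviours propose actions; the arbiter picks the proposal
# with the highest priority, so safety behaviours can override others.

def arbitrate(commands):
    """commands: list of (priority, action) pairs from active behaviours."""
    return max(commands, key=lambda c: c[0])[1]

# Obstacle avoidance outranks wandering whenever both are active.
proposals = [(1, "wander_forward"), (5, "turn_away_from_obstacle")]
chosen = arbitrate(proposals)
```

Other arbitration schemes fuse or blend commands instead of picking a single winner.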
Parameter
An adjustable value that helps define the behavior of a model or system.
Hyperparameter
Configurable settings, chosen before training, that shape a learning system, such as the learning rate or network architecture.
Fitness Function
A criterion used to evaluate the performance of potential solutions in genetic algorithms.
Crossover
Genetic operator that combines parts of two parent solutions to create offspring.
Mutation
Random alteration of parts of a solution in genetic algorithms to maintain diversity.
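Fitness function, crossover, and mutation come together in a complete genetic-algorithm loop; this is a hedged sketch on the toy "one-max" problem (evolve a bit-string toward all ones), with illustrative population sizes and rates:

```python
# Evolve 12-bit strings toward all ones: evaluate fitness, keep the fitter
# half, and produce children by crossover plus occasional mutation.
import random

random.seed(0)

def fitness(bits):                       # fitness function: count of ones
    return sum(bits)

def crossover(a, b):                     # one-point crossover of two parents
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):             # random bit flips maintain diversity
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
```

Because the fittest individuals survive unchanged each generation, the best fitness never decreases, and mutation keeps the search from stagnating.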
Sensor Fusion
Combining data from multiple sensors to produce more accurate information.
Prototyping
Creating preliminary models of a system to test ideas and functionality.
Simulation
Using models to study and analyze the behavior of systems over time.
Noise Filtering
Techniques for reducing noise in sensor data to improve accuracy.
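One of the simplest noise-filtering techniques is a moving average, which smooths a sensor stream by averaging each reading with its recent history (window size and data are illustrative):

```python
# Smooth a noisy stream: each output is the mean of the last `window`
# readings seen so far, which damps isolated spikes.

def moving_average(readings, window=3):
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        chunk = readings[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0]   # one spike of noise at index 4
smooth = moving_average(noisy)            # the spike is flattened out
```

The trade-off is latency: a wider window removes more noise but makes the filtered signal lag further behind real changes.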
Motion Planning
The process of determining a route or series of movements for a robot.
Collaborative Learning
Learning that occurs when agents work together to enhance each other's performance.
Adaptive Control
Control systems that adjust their parameters based on feedback from the environment.
Locomotion
The method by which a robot moves from one location to another.
Manipulation
The handling and using of objects by a robot.
Task Learning
The process through which robots acquire new skills or knowledge to perform tasks.
Action Selection
The process of determining which action a robot should execute in a given situation.
Search Space
The range of possible solutions in an optimization problem.
Objective Function
A function that quantifies the goal or problem that needs to be optimized.
Model Predictive Control
A control approach that uses a model to predict the system's future behavior and chooses actions that optimize performance over that prediction horizon.
Simulation-based Learning
Learning that is based on simulating the environment or tasks.
Multi-Agent Systems
Systems composed of multiple interacting agents, which may be robots, software agents, or humans.
Reinforcement Signal
Feedback used to guide the learning process in reinforcement learning.
Gait Analysis
The study of movement patterns in legged locomotion.
Embodied Intelligence
Intelligence that emerges from the interaction between organisms and their environment.
Learning from Demonstration
A method where a robot learns by observing demonstrations from humans or other robots.
Self-Organization
The process in which a system organizes itself without external guidance.
Emergent Phenomena
Complex patterns or behaviors that arise from simple rules or interactions.
Signal Processing
Techniques used to analyze and manipulate sensor data.
Cross-Validation
A technique for assessing how the results of a statistical analysis will generalize to an independent data set.
Feature Extraction
Process of transforming raw data into a set of usable features for analysis.
Policy Learning
The process of learning a policy, which maps states to actions.
Underfitting
When a model fails to capture the underlying trend of the data.
Overfitting
When a model learns the noise of the training data rather than the underlying pattern, so it generalizes poorly to new data.
Evolutionary Strategy
A type of evolutionary algorithm that optimizes real-valued parameters using mutation and selection.
Minimum Viable Product
A product with just enough features to satisfy early customers and provide feedback.
Task Space
The space in which a robot interacts with its environment to achieve its tasks.
Language Game
Interactions where agents negotiate and develop shared meanings for symbols.
Weight Alignment
Adjusting the strength of associations between words and meanings.
Lexicon
A set of associations between symbols and their meanings constructed by a robot.
Odometry
The use of data from motion sensors to estimate a robot's change in position.
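For a differential-drive robot, odometry reduces to integrating the two wheel travel distances into a pose estimate; this is a hedged sketch using the common midpoint-heading approximation (wheelbase and distances are illustrative):

```python
# Dead-reckoning odometry: accumulate (x, y, heading) from the distances
# travelled by the left and right wheels.
import math

def odometry_step(x, y, theta, d_left, d_right, wheelbase=0.5):
    """Update pose from left/right wheel travel distances (metres)."""
    d = (d_left + d_right) / 2.0             # distance moved by the centre
    dtheta = (d_right - d_left) / wheelbase  # change in heading (radians)
    x += d * math.cos(theta + dtheta / 2.0)  # midpoint heading approximation
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Equal wheel distances: the robot drives straight one metre.
x, y, th = odometry_step(0.0, 0.0, 0.0, 1.0, 1.0)
```

Because each step integrates noisy measurements, odometry error accumulates over time, which is why it is usually combined with other sensing.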
Chaos Theory
A branch of mathematics focusing on systems that are highly sensitive to initial conditions.
Social Learning
Learning that occurs through observation and interaction with others.
Adaptive Learning
Learning that evolves based on the environment and new information.
Noise Robustness
The ability of a system to function well in the presence of noise or inaccuracies.
Sensor Calibration
The process of adjusting sensor readings to align with true values.
Task Representation
The way tasks are defined and understood by a robot.
Learning Efficiency
A measure of how effectively a robot can learn new tasks.
Feedback Loop
A system where outputs are fed back as inputs to influence future outputs.
Trajectory Optimization
Improving the planned path for efficiency or better performance.
Control Algorithms
Procedures followed to control the actions and movements of a robot.
Impact of Learning
The effect that learning has on enhancing a robot's ability to perform tasks.
Clustering Algorithm
A computer method that categorizes data into groups based on similarity.
Learning Curves
A graphical representation of the increase in learning or performance over time.
Mapping
The process of building a representation of the environment from a robot's sensor data.
Self-Calibration
The ability of a robot to adjust its parameters automatically.
Motor Control
The regulation of a robot's motors and actuators to produce desired movements.
Hardware Integration
Combining hardware components to work reliably within a system.
Open-Loop Control
Control action is independent of the output, without feedback.
Closed-Loop Control
Control action is dependent on the output, using feedback to adjust actions.
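The open- versus closed-loop distinction can be shown on a toy heater model: the open-loop controller applies a fixed command regardless of the outcome, while the closed-loop (here proportional) controller corrects using the measured error. All numbers are illustrative.

```python
# Compare open-loop (no feedback) and closed-loop (proportional feedback)
# control of a toy plant that leaks 10% of its temperature each step.

def simulate(controller, target=20.0, steps=50):
    temp = 10.0
    for _ in range(steps):
        power = controller(target, temp)
        temp = 0.9 * temp + power        # toy plant dynamics
    return temp

open_loop = lambda target, temp: 1.5                      # ignores output
closed_loop = lambda target, temp: 0.5 * (target - temp)  # uses feedback

final_open = simulate(open_loop)
final_closed = simulate(closed_loop)
```

The closed-loop run ends closer to the target and, unlike the open-loop run, would also correct for disturbances; a pure proportional controller still leaves some steady-state error, which integral terms address in practice.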
Bipedal Locomotion
A form of movement using two legs.
Decision Forest
A machine learning model composed of decision trees whose predictions are combined to make a final prediction.