Bundle Adjustment
An optimization technique in computer vision and photogrammetry to refine 3D reconstruction or camera calibration models by adjusting 3D points and camera parameters simultaneously.
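A minimal sketch of the idea, assuming a toy single-camera setup, a simple pinhole model, and SciPy's least_squares solver; real systems optimize many camera poses and points jointly, and all names below are illustrative rather than from any particular library:

```python
# Bundle-adjustment sketch: jointly refine a camera pose and 3D points by
# minimizing reprojection error. Toy setup, not a production pipeline.
import numpy as np
from scipy.optimize import least_squares

def project(points3d, rvec, tvec, f=500.0):
    """Pinhole projection with a Rodrigues rotation (hypothetical toy model)."""
    theta = np.linalg.norm(rvec)
    k = rvec / theta if theta > 1e-12 else np.zeros(3)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = points3d @ R.T + tvec          # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide

def residuals(params, observed2d, n_points):
    rvec, tvec = params[:3], params[3:6]
    points3d = params[6:].reshape(n_points, 3)
    return (project(points3d, rvec, tvec) - observed2d).ravel()

# Toy data: noisy initial guesses for both pose and structure.
n = 20
true_pts = np.random.randn(n, 3) + np.array([0, 0, 10.0])
obs = project(true_pts, np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 0.5]))
x0 = np.concatenate([np.zeros(6), (true_pts + 0.1 * np.random.randn(n, 3)).ravel()])
result = least_squares(residuals, x0, args=(obs, n))  # adjusts pose and points together
```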
Computer Vision
Focuses on enabling computers to understand, analyze, and interpret visual data from images or videos, involving algorithms for tasks like object recognition and scene understanding.
Dead Reckoning Data
Information obtained through inertial navigation to estimate an object's position, velocity, or orientation by using acceleration, rotation, and time measurements.
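A planar dead-reckoning sketch under assumed IMU readings (forward acceleration and yaw rate); the state layout and sample values are hypothetical:

```python
# Dead-reckoning update: integrate acceleration and yaw rate over time to
# propagate position, velocity, and heading. Errors accumulate without
# external fixes such as GPS.
import math

def dead_reckon_step(state, accel_forward, yaw_rate, dt):
    x, y, v, heading = state
    heading += yaw_rate * dt           # integrate rotation
    v += accel_forward * dt            # integrate acceleration
    x += v * math.cos(heading) * dt    # integrate velocity into position
    y += v * math.sin(heading) * dt
    return (x, y, v, heading)

state = (0.0, 0.0, 0.0, 0.0)
for accel, gyro in [(1.0, 0.0), (1.0, 0.1), (0.0, 0.1)]:  # fake IMU samples
    state = dead_reckon_step(state, accel, gyro, dt=0.1)
```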
Edge Computing
Distributed computing paradigm bringing computation closer to data sources, reducing latency, optimizing bandwidth, and enabling offline operation in applications like IoT and real-time analytics.
Global Map Optimization
Process of improving map accuracy by optimizing landmark positions and camera poses using data from multiple sources for mapping, localization, and navigation systems.
GPS Signal
Radio frequency signals from GPS satellites providing positioning, navigation, and timing information for applications like navigation systems and location-based services.
GPS-Degraded Environment
Situation where GPS signals are degraded or intermittent, compromising positioning accuracy and requiring supplementary methods for reliable navigation.
GPS-Denied Environment
Location where GPS signals are entirely unavailable, necessitating the use of alternative positioning techniques like inertial navigation or visual-based localization.
Human Pose Estimation (HPE)
Task in computer vision to estimate human body joint positions from images or videos for applications like action recognition and motion capture.
Inertial Measurement Unit (IMU)
Electronic sensor device combining accelerometers, gyroscopes, and sometimes magnetometers to measure object motion in applications like robotics and navigation systems.
Keyframe Selection
Process of choosing specific frames from a video sequence as keyframes to capture important information for tasks like video compression and summarization.
Key Points/Pairs
Distinctive and robust image features used for tasks like image matching and object recognition, extracted using feature detection algorithms.
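As one concrete example, ORB key points can be extracted with OpenCV; the image path below is a placeholder:

```python
# Extract distinctive key points with ORB, one of several feature detectors.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(img, None)
# Each keypoint carries position, scale, and orientation; the descriptors are
# 32-byte binary vectors used later for matching between images.
```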
Light Detection and Ranging (LIDAR)
Remote sensing technology using laser light to measure distances and generate 3D representations of objects or environments in applications like mapping and robotics.
Object Occlusion
Situation in computer vision where objects are partially or entirely obscured, posing challenges in tasks like object detection and tracking.
Odometry Sensor
Device measuring vehicle or robot motion by tracking wheel rotations or speed changes for estimating distance traveled in robotics applications.
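A sketch of differential-drive wheel odometry, assuming encoder tick counts and made-up geometry constants:

```python
# Convert encoder tick deltas into per-wheel distance, then update the pose.
import math

TICKS_PER_REV = 1024   # assumed encoder resolution
WHEEL_RADIUS = 0.05    # meters, assumed
WHEEL_BASE = 0.30      # distance between wheels, assumed

def odometry_update(pose, dticks_left, dticks_right):
    x, y, theta = pose
    dl = 2 * math.pi * WHEEL_RADIUS * dticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * dticks_right / TICKS_PER_REV
    d = (dl + dr) / 2                  # forward distance of robot center
    dtheta = (dr - dl) / WHEEL_BASE    # heading change
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return (x, y, theta + dtheta)
```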
Optimization
Process of finding the best solution to minimize/maximize an objective function in computer vision and robotics for refining models and solving complex problems.
Relocalization
Process of estimating a sensor's position within a known map or reference frame by matching observed data with map features for accurate localization.
Rigid Pose Estimation (RPE)
Task of estimating the position and orientation of a rigid object in 3D space for applications like object tracking and augmented reality.
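One standard closed-form approach, given matched 3D point correspondences, is the Kabsch/SVD method; this sketch is generic rather than tied to any particular tracking system:

```python
# Recover the rigid transform (R, t) aligning two matched Nx3 point sets.
import numpy as np

def rigid_pose(src, dst):
    """Find R, t with dst ~= R @ src + t for matched Nx3 point sets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```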
Robot Drift
Cumulative error in the estimated position or pose of a robot over time due to sensor inaccuracies or limitations, affecting positioning and navigation accuracy.
Simultaneous Localization and Mapping (SLAM)
Technique to create a map of an unknown environment while estimating a robot's position within the map using sensor measurements in robotics and computer vision.
Sensor Fusion Model
Integrates data from multiple sensors to improve accuracy and understanding of the environment in computer vision and robotics applications.
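A minimal fusion example: a complementary filter that blends gyroscope integration (smooth but drifting) with the accelerometer's tilt estimate (noisy but drift-free) to track pitch; the blend factor is an assumption:

```python
# Complementary filter: high-pass the gyro path, low-pass the accel path.
import math

def fuse_pitch(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    gyro_pitch = pitch + gyro_rate * dt          # high-frequency path (drifts)
    accel_pitch = math.atan2(accel_x, accel_z)   # low-frequency reference
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```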
Visual Simultaneous Localization and Mapping (vSLAM) Modules
Components within a vSLAM system for real-time mapping and localization using visual information, including Initialization, Local Mapping, Loop Closure, Relocalization, and Tracking modules.
Visual SLAM
A system designed to map the environment around a sensor while determining the sensor's precise location and orientation from visual data.
Feature-based vSLAM
A method that detects and tracks distinct features, such as corners or edges, across multiple video frames for mapping and localization.
Direct vSLAM
A technique estimating motion and structure directly from the intensity values of all pixels in the image, allowing it to exploit dense image gradients even where distinct features are sparse.
Stereo vSLAM
Utilizes a pair of cameras to compute depth from the disparity between the two views, enabling metrically scaled 3D mapping.
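A sketch of the depth computation using OpenCV block matching and the relation Z = f·B/d; the calibration values and image paths are assumptions:

```python
# Depth from stereo disparity: Z = f * B / d, for focal length f (pixels),
# baseline B (meters), and disparity d (pixels).
import numpy as np
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale
f, baseline = 700.0, 0.12                               # assumed calibration
depth = np.where(disparity > 0, f * baseline / disparity, 0.0)
```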
Feature Extraction
Detects key points and computes descriptors in images, often aided by preprocessing, using either traditional detectors or deep learning methods.
Localization
Determines the robot's location within the environment by combining feature positions and IMU data over time.
Kalman Filters
Reduce noise and uncertainty in SLAM systems through a repeated predict-update cycle that refines the state estimate against observed measurements.
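A one-dimensional constant-velocity Kalman filter illustrating that predict-update cycle; the noise values are assumptions:

```python
# Track position and velocity from noisy position readings.
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])  # state transition (dt = 0.1)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.5]])                   # measurement noise (assumed)

x = np.zeros((2, 1))                    # state: [position, velocity]
P = np.eye(2)                           # state covariance

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.9, 2.1, 2.9, 4.2]:                 # noisy position readings
    x, P = kalman_step(x, P, np.array([[z]]))
```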
Feature Matching
Pairs corresponding features across images, underpinning Loop Closure, Relocalization, and Bundle Adjustment, which refine the map, re-establish the camera's position, and minimize reprojection errors.
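A typical matching sketch with OpenCV's brute-force matcher and Lowe's ratio test over ORB descriptors; the image paths and the 0.75 threshold are assumptions:

```python
# Match binary ORB descriptors between two frames, keeping only matches
# that pass the ratio test (clearly better than the second-best candidate).
import cv2

orb = cv2.ORB_create()
img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
candidates = matcher.knnMatch(des_a, des_b, k=2)
good = [pair[0] for pair in candidates
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
# Surviving matches feed loop closure, relocalization, and bundle adjustment.
```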
Keyframe Selection
Frames whose observations capture a representative view of the environment, letting vSLAM systems track features efficiently and scale to larger maps.
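One common heuristic, sketched with assumed thresholds: promote a frame to keyframe when too few of the last keyframe's features remain tracked, i.e. the view has changed enough to carry new information:

```python
# Keyframe decision based on feature overlap with the last keyframe.
def is_new_keyframe(tracked_ids, keyframe_ids, frames_since_kf,
                    min_overlap=0.6, min_gap=5):
    """tracked_ids: feature IDs seen in the current frame;
    keyframe_ids: feature IDs seen in the last keyframe."""
    overlap = len(set(tracked_ids) & set(keyframe_ids)) / max(len(keyframe_ids), 1)
    return frames_since_kf >= min_gap and overlap < min_overlap
```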
RGB-D SLAM
A technique using RGB-D cameras, which pair color images with per-pixel depth, to estimate camera motion and build models of the environment; particularly effective in well-lit indoor settings.
HPE (Human Pose Estimation)
Crucial for rescue robots to determine if victims need immediate assistance based on poses and physical distress signals.
2D vs. 3D HPE
2D HPE estimates body joint positions in image coordinates, while 3D HPE estimates them in three-dimensional space, which requires inferring depth from the camera view.
Occlusion in HPE
Challenge in which the poses of occluded body parts must be inferred from visible limbs, limb-length constraints, and temporal convolution across frames, complicating accurate pose identification.
Dynamic Environments in vSLAM
Challenge where vSLAM struggles in dynamic scenes because moving objects violate the static-world assumption, leading to substantial errors in map points and pose estimates.
Semantic Segmentation
Technique for labeling image pixels by class, used in vSLAM to differentiate static from dynamic features, for example applying HPE to identify people so that moving objects can be ignored for more accurate mapping.
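A sketch of the masking step, assuming a per-pixel dynamic-object mask from some segmentation model and OpenCV-style key points:

```python
# Keep only key points that fall on pixels labeled static.
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """dynamic_mask: HxW boolean array, True where pixels belong to moving objects."""
    static = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if (0 <= v < dynamic_mask.shape[0] and 0 <= u < dynamic_mask.shape[1]
                and not dynamic_mask[v, u]):
            static.append(kp)
    return static
```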
Map Management
Approach to updating maps in dynamic environments by dividing them into chunks and using probabilistic mapping techniques to adapt to change.
Ethical Concerns in Rescue Robotics
Core themes include fairness, discrimination, labor replacement, privacy, responsibility, safety, and trust in the context of rescue robot operations.
Fair Distribution
Ensuring hazards and benefits are equitably shared so that some subjects do not incur costs while others enjoy benefits; crucial in scenarios like search-and-rescue robot deployments.
False Expectations
Stakeholders often misjudge the capabilities of rescue robots, leading to overestimation or underestimation, potentially resulting in unjustified reliance or underutilization of resources.
Labor Replacement
Concerns about the prospect of rescue robots replacing human operators in high-risk missions, with potential impacts on victim contact, situational awareness, and medical support.
Privacy Concerns
The use of robots in disaster scenarios can compromise personal privacy by increasing information gathering, necessitating strict control over data usage for rescue purposes only.
Responsibility Assignment
Challenges emerge in determining responsibility in case of technical failures or harm caused by robots, especially when robots operate autonomously or have self-learning capabilities.
Safety Risks
Deploying rescue robots involves balancing safety priorities against other values, as robots can introduce new risks such as malfunctions, collisions with humans, or impacting victims' well-being.
Trust in Autonomous Systems
Trust in autonomous systems is crucial, but their unpredictability can hinder confidence, especially in critical situations like disaster scenarios, where human-robot collaboration is essential.