Computer Science IB Robots Case Study


Description and Tags

For the 2024 case study; mostly definitions.


25 Terms

1

GPS-degraded environment

any situation or location where Global Positioning System (GPS) signals are unreliable, weak, or only intermittently available.

2

GPS-denied environment

a location or situation where Global Positioning System (GPS) signals are not available at all.

3

Why do GPS-denied or GPS-degraded environments occur?

This can happen for a number of reasons, for example: indoor locations, where building materials block or weaken GPS signals; underground spaces such as tunnels and basements; urban canyons, where tall buildings reflect signals and cause multipath errors; and deliberate jamming or interference.

4

Computer vision

is a field of artificial intelligence that enables computers to derive meaningful information from images, videos and other visual inputs.

5

Odometry sensor

indicates how far the robot has travelled, based on how much its wheels have turned.
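
To make this concrete, here is a minimal sketch (not from the case study) of how wheel odometry can be turned into a pose estimate for a differential-drive robot; the wheel radius, track width and encoder values are invented for illustration.

import math

def update_pose(x, y, theta, left_ticks, right_ticks,
                ticks_per_rev=360, wheel_radius=0.05, track_width=0.30):
    """Update a differential-drive robot's pose from wheel encoder ticks."""
    # Distance travelled by each wheel (metres)
    left_dist = 2 * math.pi * wheel_radius * left_ticks / ticks_per_rev
    right_dist = 2 * math.pi * wheel_radius * right_ticks / ticks_per_rev

    centre_dist = (left_dist + right_dist) / 2             # forward motion
    delta_theta = (right_dist - left_dist) / track_width   # change in heading

    # Simple dead-reckoning update: move along the current heading, then turn
    x += centre_dist * math.cos(theta)
    y += centre_dist * math.sin(theta)
    theta += delta_theta
    return x, y, theta

# Example: start at the origin facing along x; both wheels turn 90 ticks
print(update_pose(0.0, 0.0, 0.0, left_ticks=90, right_ticks=90))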

6

Visual simultaneous localisation and mapping

is a technology that allows a device equipped with cameras to simultaneously determine its own location (localisation) and construct a map of its environment in real time. By extracting visual features from the surroundings, such as keypoints or landmarks, VSLAM enables autonomous navigation without relying on external signals like GPS, making it particularly useful in environments where GPS signals are unreliable or unavailable. (USING CAMERAS)
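
As a rough illustration of the feature-extraction step only (not a full VSLAM pipeline), the sketch below uses OpenCV's ORB detector to find keypoints and descriptors in a single camera frame; the filename is a placeholder and the opencv-python package is assumed to be installed.

import cv2

# Load one camera frame in greyscale (placeholder filename)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise SystemExit("camera_frame.png not found; supply any camera image")

# ORB finds keypoints (corner-like features) and computes binary descriptors;
# a VSLAM front end would match these descriptors between frames to track motion
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(frame, None)

print(f"Detected {len(keypoints)} keypoints")
print("Descriptor array shape:", None if descriptors is None else descriptors.shape)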

7

SLAM (Simultaneous Localisation and Mapping)

allows a device to explore and understand its surroundings, figuring out where it is and building a map as it moves, even in environments it has no prior knowledge of. (USES SENSORS IN GENERAL, e.g. LIDAR, sonar or wheel odometry, NOT NECESSARILY CAMERAS)

8

Light Detection and Ranging (LIDAR)

is a technology that uses laser light to measure distances and create a 3D map of the surroundings for a robot's navigation.
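
A small sketch of one common LIDAR processing step, converting a scan of (angle, range) measurements into x, y points in the robot's frame; the scan values below are made up.

import math

def scan_to_points(angles_deg, ranges_m):
    """Convert a 2D LIDAR scan (angles in degrees, ranges in metres)
    into Cartesian points in the robot's own frame."""
    points = []
    for angle_deg, r in zip(angles_deg, ranges_m):
        a = math.radians(angle_deg)
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Toy scan: four beams at 0, 90, 180 and 270 degrees
print(scan_to_points([0, 90, 180, 270], [1.0, 2.0, 1.5, 0.8]))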

9

Inertial Measurement Unit (IMU)

is a sensor package on a robot that measures acceleration, rotation, and sometimes magnetic fields, providing information on the robot's movement.
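
A toy sketch of how IMU readings might be integrated into heading and speed estimates; real systems filter these values because naive integration accumulates drift quickly, and all numbers here are invented.

def integrate_imu(samples, dt=0.01):
    """Naively integrate gyro (rad/s) and forward acceleration (m/s^2)
    samples taken every dt seconds into heading and speed estimates."""
    heading = 0.0   # radians
    speed = 0.0     # m/s
    for gyro_z, accel_x in samples:
        heading += gyro_z * dt   # rotation rate -> heading
        speed += accel_x * dt    # acceleration -> speed
    return heading, speed

# 100 samples of a gentle turn while speeding up (made-up data)
samples = [(0.1, 0.5)] * 100
print(integrate_imu(samples))   # roughly 0.1 rad heading, 0.5 m/s speed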

10

Local Mapping

involves a robot creating a detailed map of its nearby surroundings to aid in navigation.

11

Tracking

is the process, in VSLAM, by which a robot continuously estimates its own position and orientation by following visual features from frame to frame as it moves; more generally, it can also mean following and monitoring the movement of objects or people in its environment.

12

Loop closure

occurs when a robot recognizes a previously visited location, improving the accuracy of its map by closing the loop in its path and correcting accumulated drift.
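
A highly simplified sketch of the idea behind loop-closure detection: compare a descriptor of the current view against descriptors stored for previously visited places and flag a revisit when they are similar enough. Real systems use bag-of-words retrieval plus geometric verification; the vectors and threshold here are invented.

import numpy as np

def detect_loop_closure(current_desc, stored_descs, threshold=0.95):
    """Return the index of a previously visited place whose descriptor is
    most similar to the current one, or None if nothing is similar enough."""
    best_index, best_score = None, threshold
    for i, desc in enumerate(stored_descs):
        # Cosine similarity between whole-image descriptors
        score = np.dot(current_desc, desc) / (
            np.linalg.norm(current_desc) * np.linalg.norm(desc))
        if score > best_score:
            best_index, best_score = i, score
    return best_index

# Toy descriptors: place 2 looks almost identical to the current view
stored = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.6, 0.8, 0.0])]
current = np.array([0.59, 0.81, 0.01])
print(detect_loop_closure(current, stored))   # -> 2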

13

Optimization

involves refining a robot's actions or processes to enhance accuracy and efficiency in navigation and mapping.

14

Bundle Adjustment

is the optimization process that jointly refines the estimated camera poses and the positions of mapped landmarks (typically by minimizing reprojection error) to improve the accuracy of a robot's map.

15

Keyframe Selection

is the process of choosing a subset of significant frames from a robot's journey to represent in its map, reducing redundant data and computation.
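
A small sketch of a simple keyframe selection rule: keep the current frame as a keyframe only when the robot has moved or turned enough since the last keyframe; the thresholds are arbitrary illustrative values.

import math

def is_new_keyframe(last_kf_pose, current_pose,
                    min_distance=0.5, min_turn=math.radians(15)):
    """Decide whether the current pose (x, y, theta) is different enough
    from the last keyframe's pose to be stored as a new keyframe.
    (Angle wrap-around is ignored for simplicity.)"""
    dx = current_pose[0] - last_kf_pose[0]
    dy = current_pose[1] - last_kf_pose[1]
    moved = math.hypot(dx, dy)
    turned = abs(current_pose[2] - last_kf_pose[2])
    return moved >= min_distance or turned >= min_turn

print(is_new_keyframe((0.0, 0.0, 0.0), (0.2, 0.1, 0.05)))  # False: barely moved
print(is_new_keyframe((0.0, 0.0, 0.0), (0.7, 0.0, 0.0)))   # True: moved 0.7 m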

16

Global Map Optimization

is the refinement of the entire map a robot has created to enhance overall accuracy.

17

Key Points/Pairs

are important visual features or landmarks used by a robot to understand and remember its environment.

18

Relocalization

is the process by which a robot determines its position again if it gets lost or loses track.

19

Robot drift

is a cumulative error that builds up over time, causing a growing deviation between a robot's estimated and actual positions.

20

Dead Reckoning Data

is information used by a robot to estimate its current position based on its previous known position and movements.
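
A toy sketch of dead reckoning: estimating the current position purely from the previous estimate, a measured speed, a heading and the elapsed time. Because every step builds on the previous estimate, small errors accumulate, which is the robot drift defined above; all values are invented.

import math

def dead_reckon(start_x, start_y, steps):
    """Each step is (speed_m_per_s, heading_rad, duration_s).
    Returns the estimated position after applying all steps in order."""
    x, y = start_x, start_y
    for speed, heading, duration in steps:
        x += speed * duration * math.cos(heading)
        y += speed * duration * math.sin(heading)
    return x, y

# Drive east for 10 s at 1 m/s, then north for 5 s at 2 m/s
steps = [(1.0, 0.0, 10.0), (2.0, math.pi / 2, 5.0)]
print(dead_reckon(0.0, 0.0, steps))   # approximately (10.0, 10.0)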

21

RPE (Rigid Pose Estimation)

models help a robot determine the position and orientation (pose) of itself and of rigid objects relative to one another in the environment.

22

HPE (Human Pose Estimation)

models help a robot understand the positions and movements of humans in its environment.

23

Multiple-Object Occlusion

occurs when several objects are in the robot's view, with some partially blocking others.

24

Edge Computing

involves a robot processing data and making decisions on or near the robot itself, close to where the data is collected, reducing reliance on a remote central server and lowering latency.

25

Sensor fusion models

are models that integrate information from different sensors, such as cameras, LIDAR and IMUs, to obtain a more complete understanding of the environment than any single sensor could provide.
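
A minimal sketch of one simple sensor-fusion idea, a complementary filter that blends a fast-but-drifting heading from gyroscope integration with a noisy-but-drift-free heading from a compass; real fusion models (e.g. Kalman filters) are more sophisticated, and the weight and readings here are invented.

def fuse_heading(gyro_heading, compass_heading, alpha=0.98):
    """Complementary filter: trust the gyro-integrated heading for short-term
    changes and the compass for long-term correctness."""
    return alpha * gyro_heading + (1 - alpha) * compass_heading

# The gyro estimate has drifted a little; the compass pulls it back slowly
fused = fuse_heading(gyro_heading=1.05, compass_heading=1.00)
print(round(fused, 4))   # 1.049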