CMPSC 311 – Caching & Memory Hierarchy

Description and Tags

Vocabulary flashcards covering caching fundamentals, memory hierarchy, locality, cache architectures, performance equations, and common caching and writing strategies.


33 Terms

1

Cache

A small, fast storage layer that temporarily holds a subset of the data in a larger, slower storage device.

2

Memory Hierarchy

Organization of storage layers where each faster, smaller level serves as a cache for the next slower, larger level.

3

Locality

The tendency of programs to access the same or nearby data repeatedly, enabling effective caching.

4

Spatial Locality

Program behavior where accesses tend to cluster near recently accessed addresses.

5

Temporal Locality

Program behavior where recently accessed data is likely to be accessed again soon.
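
A minimal C sketch (an illustration, not from the course materials) showing both kinds of locality in one loop: the array traversal exhibits spatial locality, while the repeated use of the accumulator exhibits temporal locality.

```c
#include <stdio.h>

#define N 1024

int main(void) {
    int a[N];
    long sum = 0;

    for (int i = 0; i < N; i++)
        a[i] = i;

    /* Spatial locality: a[i] walks through consecutive addresses, so each
       cache line fetched on a miss is fully used by the next iterations.
       Temporal locality: sum and i are touched on every iteration, so they
       stay in the cache (or in registers) for the whole loop. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```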

6

L1 Cache

Fastest, smallest on-chip processor cache level; typically ≈100× faster than main memory.

7

L2 Cache

Intermediate cache level, larger and slower than L1 but still faster than main memory.

8

L3 Cache

Large, slower cache (often shared across cores) that backs L2; still quicker than DRAM.

9

Cache Line (Block)

Fixed-size unit of data transferred between memory and cache.

10

Cache Hit

Event when requested data is found in the cache.

11

Cache Miss

Event when requested data is not in the cache and must be fetched from the next memory level.

12

Cold (Compulsory) Miss

Miss that occurs because the data has never been loaded into the cache.

13

Capacity Miss

Miss that occurs when the cache cannot contain the working set of a program.

14

Conflict Miss

Miss arising when multiple blocks compete for the same cache line(s) in direct-mapped or set-associative caches.

15

Fully Associative Cache

Placement policy where any memory block can be stored in any cache line.

16

Direct-Mapped Cache

Placement policy where each memory block maps to exactly one cache line, typically line = (block number) mod (number of lines).
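
A hedged C sketch of how a direct-mapped cache might split an address into offset, index, and tag; the 64-byte line size and 128-line cache are illustrative assumptions, not values given in the course.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters (assumptions, not course-specified values). */
#define LINE_SIZE  64u    /* bytes per cache line  -> 6 offset bits */
#define NUM_LINES  128u   /* lines in the cache    -> 7 index bits  */

/* Split a byte address into the three fields a direct-mapped cache uses. */
static void split_address(uint64_t addr,
                          uint64_t *offset, uint64_t *index, uint64_t *tag) {
    *offset = addr % LINE_SIZE;               /* byte within the line         */
    *index  = (addr / LINE_SIZE) % NUM_LINES; /* block number mod line count  */
    *tag    = (addr / LINE_SIZE) / NUM_LINES; /* identifies which block it is */
}

int main(void) {
    uint64_t off, idx, tag;
    split_address(0x12345678u, &off, &idx, &tag);
    printf("offset=%llu index=%llu tag=%llu\n",
           (unsigned long long)off, (unsigned long long)idx,
           (unsigned long long)tag);
    return 0;
}
```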

17

Set-Associative Cache

Placement policy where each block can reside in any line within a small set; e.g., 4-way allows four possible lines.

18

Placement Policy

Rule that decides where an incoming block can be placed in the cache.

19

Replacement Policy

Rule that selects which cache block to evict on a miss when the cache (or set) is full.

20

Least Recently Used (LRU)

Replacement policy that evicts the cache line that has not been accessed for the longest time.
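
A minimal LRU sketch for one cache set, using a logical access counter as the timestamp; the 4-way layout and field names are assumptions made for illustration, not the course's implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS 4  /* assumed associativity for this sketch */

struct line {
    bool     valid;
    uint64_t tag;
    uint64_t last_used;  /* value of the access counter at the last touch */
};

/* Return the way holding `tag`, loading it on a miss and evicting the
   least-recently-used line when the set is full. */
int lru_access(struct line set[WAYS], uint64_t tag, uint64_t now) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (set[w].valid && set[w].tag == tag) {  /* hit: refresh timestamp */
            set[w].last_used = now;
            return w;
        }
        if (!set[w].valid)                        /* prefer an empty way */
            victim = w;
        else if (set[victim].valid &&
                 set[w].last_used < set[victim].last_used)
            victim = w;                           /* oldest timestamp so far */
    }
    set[victim] = (struct line){ .valid = true, .tag = tag, .last_used = now };
    return victim;
}

int main(void) {
    struct line set[WAYS] = {0};
    uint64_t clock = 0;
    uint64_t tags[] = { 1, 2, 3, 4, 1, 5 };   /* tag 5 evicts tag 2, the LRU line */
    for (int i = 0; i < 6; i++)
        printf("tag %llu -> way %d\n", (unsigned long long)tags[i],
               lru_access(set, tags[i], clock++));
    return 0;
}
```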

21

Least Frequently Used (LFU)

Replacement policy that evicts the line with the fewest accesses.

22

First-In First-Out (FIFO)

Replacement policy that evicts lines in the same order they were brought into the cache.

23

Hit Ratio

Percentage of memory accesses served by the cache; key measure of cache efficiency.

24

Miss Penalty

Extra time required to fetch data from the next memory level after a miss.

25

Average Memory Access Time (AMAT)

AMAT = hit_time + miss_ratio × miss_penalty
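
For example, assuming a 1 ns hit time, a 95% hit ratio (so a 5% miss ratio), and a 100 ns miss penalty: AMAT = 1 + 0.05 × 100 = 6 ns. (The numbers here are illustrative, not from the course.)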

26

Working Set

Set of active memory blocks a program references during a given period.

27

Cache-Aside (Read-Aside)

Application checks cache; on miss it reads from DB and then writes result into cache.
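
A hedged cache-aside sketch in C; cache_get, cache_put, and db_read are hypothetical helpers (declared but left undefined here) standing in for whatever cache and database API an application actually uses.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helpers -- stand-ins for a real cache/database API. */
bool cache_get(const char *key, char *buf, size_t len);
void cache_put(const char *key, const char *value);
bool db_read(const char *key, char *buf, size_t len);

/* Cache-aside: the application drives the cache explicitly.
   On a hit it returns the cached value; on a miss it reads from the
   database and then populates the cache itself. */
bool lookup(const char *key, char *buf, size_t len) {
    if (cache_get(key, buf, len))     /* hit: serve from the cache          */
        return true;
    if (!db_read(key, buf, len))      /* miss: go to the backing database   */
        return false;
    cache_put(key, buf);              /* populate the cache for future reads */
    return true;
}
```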

28

Read-Through Cache

Cache itself fetches from DB on a miss, returns data to application, and stores it in cache automatically.

29

Write-Through

Write strategy that updates both cache and backing store simultaneously.

30

Write-Back

Write strategy that updates only the cache; memory is updated later when the line is evicted.
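
A hedged sketch contrasting the two write strategies with a per-line dirty bit; the structure, line size, and the memory_write_line helper (left undefined) are assumptions for illustration, not the course's code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define LINE_SIZE 64u   /* assumed line size for this sketch */

struct cache_line {
    bool    valid;
    bool    dirty;                 /* set only by the write-back policy */
    uint8_t data[LINE_SIZE];
};

/* Hypothetical stand-in for writing a whole line to the next memory level. */
void memory_write_line(uint64_t addr, const uint8_t *data);

/* Write-through: update the cached copy and main memory on every store,
   so memory stays consistent and eviction needs no extra write. */
void store_write_through(struct cache_line *line, uint64_t addr,
                         size_t offset, uint8_t byte) {
    line->data[offset] = byte;
    memory_write_line(addr, line->data);
}

/* Write-back: update only the cached copy and mark it dirty; memory is
   updated later, when the dirty line is evicted. */
void store_write_back(struct cache_line *line, size_t offset, uint8_t byte) {
    line->data[offset] = byte;
    line->dirty = true;
}

void evict(struct cache_line *line, uint64_t addr) {
    if (line->dirty)               /* write-back lines flush on eviction */
        memory_write_line(addr, line->data);
    line->valid = false;
    line->dirty = false;
}
```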

31

Write-Around

Write strategy that bypasses the cache and writes directly to main memory; the data is cached only if a later read misses on it.

32

Miss Rate

1 − hit_ratio; probability that a cache access results in a miss.

33

Cache Eviction (Victim)

The block removed from cache to make room for a new block.