CMPEN 331 Exam 3 Spring 2025

Description and Tags

End of Lec 21 + Lec 22-26

25 Terms

1

Direct Mapped Caching

Each memory block is mapped to exactly one block in the cache [(Block address) mod (Blocks in cache)]
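
A minimal sketch of this mapping in C, assuming a hypothetical 8-block cache (the numbers are illustrative, not from the lectures):

```c
#include <stdio.h>
#include <stdint.h>

#define BLOCKS_IN_CACHE 8u   /* hypothetical cache with 8 blocks */

/* Direct mapping: each memory block lands in exactly one cache slot. */
static uint32_t cache_index(uint32_t block_address) {
    return block_address % BLOCKS_IN_CACHE;
}

int main(void) {
    /* Block addresses 5 and 13 both map to index 5 (13 mod 8 = 5),
       so in a direct-mapped cache they would evict each other. */
    printf("block  5 -> index %u\n", cache_index(5));
    printf("block 13 -> index %u\n", cache_index(13));
    return 0;
}
```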

2

Tag

The upper portion of an address that identifies a memory block

3

Validity bit

Indicates whether a block in the cache contains valid data

4

Write-Through Caching

On a data-write hit, update the block in cache and memory

5

Write Buffer

A queue that holds data while it’s waiting to be written to memory

6

Write-Back Caching

On a data-write hit, update only the cache; the modified block is written back to memory when it is replaced

7

Dirty bit

A bit kept for each data cache block in write-back caching that indicates whether the block has been modified and therefore needs to be written back to memory when evicted
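
A hedged C sketch of how a write-back cache can use the dirty bit on a write hit and at eviction; the cache_line_t structure and helper names are invented for illustration, not the course's implementation:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_BYTES 16

/* Illustrative cache line: valid bit, dirty bit, tag, and block data. */
typedef struct {
    bool     valid;
    bool     dirty;
    uint32_t tag;
    uint8_t  data[BLOCK_BYTES];
} cache_line_t;

/* Stand-in for the next level of the memory hierarchy (illustrative stub). */
static void memory_write_block(uint32_t tag, const uint8_t *data) {
    (void)data;
    printf("writing back block with tag 0x%x\n", (unsigned)tag);
}

/* Write hit: update only the cache and mark the line dirty. */
static void write_hit(cache_line_t *line, uint32_t offset, uint8_t value) {
    line->data[offset] = value;
    line->dirty = true;                     /* memory copy is now stale */
}

/* Eviction: write the block back only if it was modified. */
static void evict(cache_line_t *line) {
    if (line->valid && line->dirty)
        memory_write_block(line->tag, line->data);   /* write-back */
    line->valid = false;
    line->dirty = false;
}

int main(void) {
    cache_line_t line = { .valid = true, .dirty = false, .tag = 0x1A };
    write_hit(&line, 0, 42);   /* cache updated, memory untouched */
    evict(&line);              /* only now is the modified block written back */
    return 0;
}
```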

8

Cache miss

A request for data from the cache that cannot be filled because the data isn’t present in the cache; the CPU pipeline stalls while the data is fetched from memory

9

Compulsory miss

A cache miss that occurs on the first access to a block that has never been in the cache (a cold-start miss)

10

Conflict miss

A cache miss that occurs when several memory blocks map to the same cache location and evict one another, even though the cache is not full

11

Capacity miss

A cache miss that occurs when the cache isn’t big enough to contain all the blocks needed by the program

12

Block size tradeoff

Larger block sizes take advantage of spatial locality, but if the block size is too big relative to the cache size, the miss rate goes up (fewer blocks fit in the cache) and the miss penalty also increases (more data to transfer per miss)

13

Average Memory Access Time Formula

Hit time + (Miss rate * Miss penalty)
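
A worked example of the formula with made-up numbers (1-cycle hit time, 5% miss rate, 100-cycle miss penalty):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers, not taken from the course. */
    double hit_time     = 1.0;    /* cycles on a hit */
    double miss_rate    = 0.05;   /* 5% of accesses miss */
    double miss_penalty = 100.0;  /* cycles to fetch from the next level */

    /* AMAT = hit time + (miss rate * miss penalty) */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);   /* 1 + 0.05 * 100 = 6.0 */
    return 0;
}
```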

14

Handling a read cache miss

Stall the pipeline, fetch the block from the next level of memory, and deliver it to the cache and the requested word to the processor

15

Handling a write cache miss (Write allocate)

Always write the word into the cache, updating both the tag and the data; no stall is needed

16

Handling a write cache miss (No write allocate)

Always write directly into the memory without cache allocation (invalidates that cache block)
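
A hedged sketch contrasting the two write-miss policies above; the cache_install, cache_invalidate, and memory_write helpers are invented stubs, not a real cache interface:

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented stubs standing in for the cache and memory (illustration only). */
static void cache_install(unsigned addr, unsigned data) { printf("cache: install %u at 0x%x\n", data, addr); }
static void cache_invalidate(unsigned addr)             { printf("cache: invalidate 0x%x\n", addr); }
static void memory_write(unsigned addr, unsigned data)  { printf("memory: write %u at 0x%x\n", data, addr); }

/* Handle a write miss under either policy. */
static void handle_write_miss(unsigned addr, unsigned data, bool write_allocate) {
    if (write_allocate) {
        /* Write allocate: place the word in the cache, updating tag and data. */
        cache_install(addr, data);
    } else {
        /* No write allocate: write straight to memory and invalidate any
           matching cache block so the cache never holds a stale copy. */
        memory_write(addr, data);
        cache_invalidate(addr);
    }
}

int main(void) {
    handle_write_miss(0x1000, 42, true);   /* write allocate */
    handle_write_miss(0x2000,  7, false);  /* no write allocate */
    return 0;
}
```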

17

Cache Size formula

Block count x (block size in bits + tag size + valid bit size)
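
A worked example for a hypothetical direct-mapped cache with 1024 blocks, one 32-bit word of data per block, and 32-bit addresses (giving 20 tag bits, per the tag size formula on the next card):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical cache: 1024 blocks, 1 word (32 bits) of data per block. */
    long block_count = 1024;
    long data_bits   = 32;   /* block size in bits */
    long tag_bits    = 20;   /* 32 - log2(1024 * 1 * 4), see the next card */
    long valid_bits  = 1;

    long total_bits = block_count * (data_bits + tag_bits + valid_bits);
    printf("total cache size = %ld bits\n", total_bits);   /* 1024 * 53 = 54272 */
    return 0;
}
```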

18

Tag Size formula

32 - log2(block count x block size (in words) x word size (in bytes))
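
The same hypothetical cache from the previous card worked through the formula (1024 blocks, 1 word per block, 4-byte words, 32-bit byte addresses); compile with -lm for log2:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hypothetical cache: 1024 blocks, 1 word per block, 4 bytes per word. */
    double block_count     = 1024;
    double words_per_block = 1;
    double bytes_per_word  = 4;

    /* tag bits = 32 - log2(block count * block size in words * word size in bytes) */
    double tag_bits = 32 - log2(block_count * words_per_block * bytes_per_word);
    printf("tag size = %.0f bits\n", tag_bits);   /* 32 - log2(4096) = 20 */
    return 0;
}
```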

19

Block count

Total data size (in bits) / word size (in bits) / words per block
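
A small example of this division with illustrative numbers (64 Kibit of data storage, 32-bit words, 4 words per block; the sizes are assumptions, not course values):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative sizes, not from the course. */
    long data_bits       = 64 * 1024;   /* total data storage in bits */
    long bits_per_word   = 32;          /* word size */
    long words_per_block = 4;           /* words per block */

    long block_count = data_bits / bits_per_word / words_per_block;
    printf("block count = %ld\n", block_count);   /* 65536 / 32 / 4 = 512 */
    return 0;
}
```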

20

DRAM Memory Organization

N rows by N columns by M bits; DRAM reads/writes M bits at a time, and each M-bit access requires a RAS/CAS cycle

21

RAS and CAS

Row Access Strobe and Column Access Strobe, two halves of DRAM addresses
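
A hedged sketch of splitting an address into its RAS (row) and CAS (column) halves; the 10-bit column width is an assumption for illustration:

```c
#include <stdio.h>

#define COL_BITS 10u   /* assumed number of column-address bits */

int main(void) {
    unsigned dram_address = 0x3F2A5;                       /* example 20-bit address */
    unsigned row = dram_address >> COL_BITS;               /* upper half, sent with RAS */
    unsigned col = dram_address & ((1u << COL_BITS) - 1);  /* lower half, sent with CAS */
    printf("row = 0x%X, col = 0x%X\n", row, col);
    return 0;
}
```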

22

Access time

Time between request and when word is read/written

23

Cycle time

Time between successive read/write requests

24

Bandwidth

How much data can be supplied per unit time

25