memory hierarchy + cache systems


14 Terms

1. word

smallest addressable unit in memory

2. block

method of grouping sequential words in memory

  • number of blocks = number of words / block size
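
A quick worked instance of this formula, with hypothetical sizes (a 4096-word memory with 16-word blocks; all numbers are assumptions):

```python
# Hypothetical sizes: 4096-word memory, 16 words per block.
num_words = 4096
block_size = 16
num_blocks = num_words // block_size  # number of blocks = words / block size
print(num_blocks)  # 256
```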

3. memory address

uniquely identifies a word

  • block field (which block contains the word) + offset field (word's position within that block)
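
A minimal sketch of the field split, reusing the hypothetical 16-word blocks from above (integer division and modulo stand in for slicing the address bits):

```python
BLOCK_SIZE = 16  # hypothetical block size in words

def split_address(addr):
    block = addr // BLOCK_SIZE   # block field: which block holds the word
    offset = addr % BLOCK_SIZE   # offset field: word position in the block
    return block, offset

print(split_address(679))  # (42, 7): word 679 is word 7 of block 42
```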

4. exposed hierarchy

programmers decide where to store data

  • complex load/store instructions for each form of memory in the hierarchy

  • when the CPU wants to load data, the instruction must specify which memory in the hierarchy is being used, since the program has full access to it

5. implicit memory hierarchy

memory complexities hidden from processor

  • hardware handles where data goes, in fast or slow memory

    • processor sees fast memory as long as hardware keeps the required data in cache

    • processor must still handle different latencies on demand

6. cache miss

data for address is not in cache

  • request passed to main memory, high latency

  • temporal locality: recently accessed data is retained in cache

  • spatial/sequential locality: additional nearby addresses are requested from memory

7. EAT (effective access time) for hit and miss ratios

without a cache, EAT = main memory access time

  • if access is overlapped and the cache is checked in parallel with main memory, then

    • EAT = (HR × cache access time) + (MR × main memory access time)
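
A worked example of the overlapped-access formula, with illustrative timings (95% hit ratio, 5 ns cache, 100 ns main memory; all values are assumptions):

```python
hit_ratio = 0.95
miss_ratio = 1 - hit_ratio
cache_access = 5      # ns, assumed
memory_access = 100   # ns, assumed

# EAT = (HR × cache access) + (MR × main memory access)
eat = hit_ratio * cache_access + miss_ratio * memory_access
print(eat)  # 9.75 ns
```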

8. internal structure of cache (given a request for data in a memory address)

  1. cache checks for a tag that matches the memory address

  2. if the tag is found, it is a cache hit, so return the data

  3. else it is a cache miss:

  • read the data at that address from main memory and return it

  • select a location in cache to store the data
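
A minimal sketch of these steps, assuming memory is a flat list of words and the cache is a dict keyed by block number (which doubles as the tag here; all names are illustrative):

```python
BLOCK_SIZE = 16
cache = {}  # block number (acting as the tag) -> the block's words

def read(address, memory):
    block, offset = address // BLOCK_SIZE, address % BLOCK_SIZE
    if block in cache:                    # steps 1-2: tag match, cache hit
        return cache[block][offset]
    start = block * BLOCK_SIZE            # step 3: miss, read from memory
    cache[block] = memory[start:start + BLOCK_SIZE]
    return cache[block][offset]           # return the data after caching it
```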

9. direct mapped cache

each block in memory is mapped to one cache block

  • multiple memory blocks can be mapped to the same cache block

  • address = tag (leftover bits) + block (number of bits to address all cache blocks) + offset (number of bits to address all words in a block)
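
A sketch of the direct-mapped address split, with hypothetical sizes (8 cache blocks, 4 words per block); the arithmetic mirrors slicing the tag, block, and offset bits:

```python
CACHE_BLOCKS = 8  # block field: log2(8) = 3 bits
BLOCK_SIZE = 4    # offset field: log2(4) = 2 bits

def decode(addr):
    offset = addr % BLOCK_SIZE                   # word within the block
    block = (addr // BLOCK_SIZE) % CACHE_BLOCKS  # which cache block
    tag = addr // (BLOCK_SIZE * CACHE_BLOCKS)    # leftover bits
    return tag, block, offset

print(decode(110))  # (3, 3, 2): memory blocks 3, 11, 19, 27, ... share cache block 3
```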

10. one pro and one con of direct mapped cache

  • search is fast, as there is only 1 location in cache where an address can be

  • overly rigid

11. fully associative cache

main memory blocks can be stored anywhere in cache

  • address = tag (block of main memory) + offset (position of word within that block)

12. con of fully associative cache

  • may have to search entire cache to find tag

    • if no space in cache to copy data from main memory, select victim block according to replacement policy

13. set associative cache

cache divided into contiguous, equal-sized segments called sets

  • each address maps to exactly 1 set (main memory block number MOD number of sets), as sketched below

    • but can be stored anywhere within that set

  • address is tag (identifies the main memory block) + set (which set in the cache) + offset (word in block)
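
A sketch of the set mapping, with assumed sizes (4 sets, 2 blocks per set) and FIFO eviction within a set for brevity:

```python
NUM_SETS = 4
WAYS = 2
sets = [[] for _ in range(NUM_SETS)]  # each set holds up to WAYS block numbers

def place(block_number):
    s = block_number % NUM_SETS        # each block maps to exactly one set
    if block_number not in sets[s]:
        if len(sets[s]) == WAYS:       # set full: a victim must be chosen
            sets[s].pop(0)             # FIFO within the set, for brevity
        sets[s].append(block_number)   # may occupy any slot within the set
    return s

print(place(13))  # block 13 -> set 13 MOD 4 = 1
```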

14. 3 replacement policies

  1. LRU (least recently used): keeps track of the last time each block in cache was accessed

  • victim block is the block unused for longest; complex to track (see the sketch below)

  2. FIFO: victim block is the block that has been in cache the longest

  • simple, but still requires tracking insertion order

  3. random policy: a random block is selected as the victim block

  • no overhead, but risky
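
A minimal sketch of the LRU policy referenced above, using Python's OrderedDict as an illustrative stand-in for per-block access-time tracking:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block number -> data, in access order

    def access(self, block, data=None):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return self.blocks[block]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the LRU victim block
        self.blocks[block] = data            # insert the new block
        return data
```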