word
smallest addressable unit in memory
block
a group of sequential words in memory, treated as a unit
number of blocks = number of words / block size
memory address
uniquely identifies a word
block field (which block contains the word) + offset field (word's position within that block)
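A small sketch of this split, assuming a hypothetical memory of 2^16 words with 8-word blocks (both sizes invented for illustration):

```python
MEMORY_WORDS = 2 ** 16
BLOCK_SIZE = 8                                   # words per block

num_blocks = MEMORY_WORDS // BLOCK_SIZE          # number of words / block size = 8192
OFFSET_BITS = 3                                  # log2(8) bits address a word in a block

address = 0b1010110011100101                     # a 16-bit word address
block_field = address >> OFFSET_BITS             # which block the word is in
offset_field = address & (BLOCK_SIZE - 1)        # word position within that block
print(num_blocks, block_field, offset_field)     # 8192 5532 5
```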
exposed hierarchy
programmers decide where to store data
complex load/store instructions for each form of memory in the hierarchy
when the CPU loads data, the instruction must specify which memory in the hierarchy is used, since the processor has full access to every level
implicit memory hierarchy
memory complexities hidden from processor
hardware handles where data goes, in fast or slow memory
processor sees fast memory as long as hardware keeps required data in cache
processor must still tolerate different latencies, since a request may be served from fast or slow memory
cache miss
data for address is not in cache
request passed to main memory, high latency
temporal locality: data retained in cache
spatial/sequential locality: nearby addresses are also fetched from memory
EAT (effective access time) for hit and miss ratios
without cache, EAT = main memory access time
if access is overlapped and the cache is checked in parallel with main memory, then
EAT = (HR x cache access time) + (MR x main memory access time)
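A quick sanity check of the formula, with a hypothetical hit ratio and latencies (none of these numbers come from the notes):

```python
hit_ratio = 0.95
miss_ratio = 1 - hit_ratio
cache_ns = 2                      # cache access time
memory_ns = 100                   # main memory access time

# Overlapped access: a miss costs only the main memory access,
# since the cache was checked in parallel.
eat = hit_ratio * cache_ns + miss_ratio * memory_ns
print(f"EAT = {eat:.1f} ns")      # 0.95*2 + 0.05*100 = 6.9 ns
```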

internal structure of cache (given a request for data in a memory address)
cache checks for a tag that matches the memory address to see if the data is in the cache
if tag found, cache hit so return data
else cache miss
read data at address from memory and return data
select address in cache to store data
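A minimal sketch of this flow, with a dict keyed by tag standing in for the cache (the names, capacity, and eviction choice here are all illustrative):

```python
def lookup(cache, main_memory, address, offset_bits=3, capacity=4):
    tag = address >> offset_bits            # drop the offset to form the tag
    if tag in cache:                        # tag found: cache hit
        return cache[tag]                   # return the cached data
    data = main_memory[address]             # cache miss: read from main memory
    if len(cache) >= capacity:              # no free slot: select a victim block
        cache.pop(next(iter(cache)))        # placeholder replacement policy
    cache[tag] = data                       # store the data in the cache
    return data

print(lookup({}, list(range(64)), 0b101101))   # miss, then cached
```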
direct mapped cache
each block in memory is mapped to one cache block
multiple memory blocks can be mapped to same cache blocks
address = tag (leftover upper bits) + block (enough bits to address all cache blocks) + offset (enough bits to address all words in a block)
1 pro and con of direct mapped cache
search is fast as there is only 1 location in cache where an address can be
overly rigid: blocks that map to the same cache block evict each other even if the rest of the cache is empty
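A sketch of the direct-mapped address split above, with a hypothetical geometry of 16 cache blocks (4 block bits) and 8-word blocks (3 offset bits):

```python
BLOCK_BITS, OFFSET_BITS = 4, 3    # hypothetical cache geometry

def direct_mapped_fields(address):
    offset = address & ((1 << OFFSET_BITS) - 1)                  # word within the block
    block = (address >> OFFSET_BITS) & ((1 << BLOCK_BITS) - 1)   # the one cache block it can use
    tag = address >> (OFFSET_BITS + BLOCK_BITS)                  # leftover upper bits
    return tag, block, offset

print(direct_mapped_fields(0b1010110011100101))  # (345, 12, 5)
```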
fully associative cache
main memory blocks can be stored anywhere in cache
address = tag (main memory block number) + offset (position of word within that block)
con of fully associative cache
may have to search entire cache to find tag
if no space in cache to copy data from main memory, select victim block according to replacement policy
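A sketch of a fully associative lookup, where the tag must be compared against every cache line (a linear search here; the names and structure are illustrative):

```python
OFFSET_BITS = 3                                   # hypothetical 8-word blocks

def fully_associative_lookup(cache_lines, address):
    tag = address >> OFFSET_BITS                  # tag = main memory block number
    offset = address & ((1 << OFFSET_BITS) - 1)   # word within the block
    for line_tag, block in cache_lines:           # may have to search the entire cache
        if line_tag == tag:
            return block[offset]                  # hit: return the requested word
    return None                                   # miss: fetch from main memory
```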
set associative cache
cache divided into contiguous, equal-sized segments called sets
each address maps to only 1 set (main memory block number MOD number of sets)
but can be stored anywhere within that set
address = tag (identifies the main memory block within its set) + set (which set of the cache) + offset (word within the block)
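A sketch of the set mapping, assuming a hypothetical cache with 4 sets and 8-word blocks:

```python
NUM_SETS, OFFSET_BITS = 4, 3      # hypothetical: 4 sets, 8-word blocks

def set_associative_fields(address):
    offset = address & ((1 << OFFSET_BITS) - 1)   # word within the block
    block_number = address >> OFFSET_BITS         # main memory block number
    set_index = block_number % NUM_SETS           # block number MOD number of sets
    tag = block_number // NUM_SETS                # remaining upper bits identify the block
    return tag, set_index, offset

print(set_associative_fields(0b1010110011100101))  # (1383, 0, 5)
```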
3 replacement policies
LRU (least recently used): keeps track of last time each block in cache was accessed
victim block is block unused for longest, complex to track
FIFO: victim block is block in cache the longest
simpler than LRU, though it still tracks how long each block has been in the cache
random policy: random block selected as victim block
no overhead but risky
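A minimal LRU sketch using Python's OrderedDict; the class and method names are invented for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Tracks access order; the least recently used block is the victim."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()             # least recently used first

    def access(self, tag, fetch_block):
        if tag in self.blocks:                  # hit: mark as most recently used
            self.blocks.move_to_end(tag)
            return self.blocks[tag]
        if len(self.blocks) >= self.capacity:   # full: evict the victim block
            self.blocks.popitem(last=False)     # least recently used entry
        self.blocks[tag] = fetch_block(tag)     # miss: fetch and insert
        return self.blocks[tag]
```

Dropping the move_to_end call turns this into FIFO (eviction by arrival order), and evicting an arbitrary key instead of the first one gives the random policy.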