Comp org 5.1 Memory hierarchy

63 Terms

1

Memory Hierarchy

A structure composed of multiple memory levels with increasing size and decreasing speed, providing the illusion of a large, fast memory system

2

Principle of Locality

The idea that programs access a small portion of their address space at any time, enabling performance gains through memory hierarchy

3

Temporal Locality

Recently accessed data will likely be accessed again soon

4

Spatial Locality

Data with address proximity to recently accessed data will likely be accessed soon

5

Block (Line)

The unit of copying within the memory hierarchy, often containing multiple words

6

Upper Level (of Memory)

Faster, smaller level closer to the CPU that uses more expensive technology

7

Hit

When requested data is found in the upper level of the memory hierarchy

8

Hit Ratio

Hits ÷ total accesses at a given level of the memory hierarchy
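
A quick illustration of the ratio above, as a minimal C sketch; the access counts are made-up example numbers, not values from this card set:

#include <stdio.h>

/* Hit ratio = hits / total accesses at one level of the hierarchy;
   the miss ratio is simply the remaining fraction. */
int main(void) {
    unsigned long hits = 952;        /* accesses found in the upper level */
    unsigned long accesses = 1000;   /* total accesses to that level      */

    double hit_ratio  = (double)hits / accesses;
    double miss_ratio = 1.0 - hit_ratio;  /* fraction served by the lower level */

    printf("hit ratio  = %.3f\n", hit_ratio);   /* prints 0.952 */
    printf("miss ratio = %.3f\n", miss_ratio);  /* prints 0.048 */
    return 0;
}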

9

Miss

When requested data is not present in the upper memory level, requiring a transfer from a lower level

10

Miss Penalty

Time required to retrieve a block from the lower level of the memory hierarchy

11

SRAM

A memory type built from 6–8 transistors per bit that does not need refreshing and provides fast access times

12

Specific to SRAM

Fixed access time to any piece of information

13

Number of transistors per bit of SRAM

6-8 to ensure information is not disturbed

14

Standby Mode

Needs only minimal power to retain charge - specific to SRAM

15

DRAM

Memory built from one capacitor and one transistor per bit that requires periodic refresh due to charge leakage

16

Reason DRAM Data fades

Capacitors leak charge over time - periodic refresh is the solution

17

Number of transistors per bit of DRAM

One transistor and one capacitor per bit - high density and cost efficiency

18

Specific to reading data from DRAM

Destructive to the capacitor charge, so the value must be rewritten after each read

19

Word-Line

Row line in a DRAM memory array used to access a full row of cells

20

Bit-Line

Column line in a DRAM array used to transfer bit data during reads and writes

21

Sense Amplifier

Component in DRAM that detects and amplifies small charge differences to determine bit values, and rewrites data (refresh) after reads

22

Burst Mode

A DRAM feature that supplies successive words from an opened row with reduced latency

23

DDR DRAM (Double Data Rate)

DRAM technology that transfers data on both clock edges, doubling bandwidth
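
A rough worked example, with an assumed clock rate and bus width (neither is given on these cards): a 64-bit DDR interface clocked at 800 MHz transfers on both clock edges, so it performs 1600 million transfers per second; at 8 bytes per transfer that is about 12.8 GB/s of peak bandwidth, twice the roughly 6.4 GB/s a single-data-rate interface would deliver at the same clock.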

24

QDR DRAM (Quad Data Rate)

DRAM with separate DDR input and output channels for increased bandwidth

25

NOR Flash

Flash type with random read/write access, used for instruction memory in embedded systems

26

NAND Flash

Flash type with higher density and block-level access, cheaper per GB, used in USB drives and storage devices

27

Wear Levelling

A technique that remaps flash memory writes to reduce block wear and extend lifespan
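
A toy C sketch of the idea: each write is directed to the physical block with the fewest erases so far, so wear spreads evenly. The block count and write loop are invented for illustration, and a real flash translation layer also maintains a logical-to-physical map so the data can still be located:

#include <stdio.h>

#define BLOCKS 4

int main(void) {
    unsigned erase_count[BLOCKS] = {0};

    for (int write = 0; write < 10; write++) {
        int target = 0;
        for (int b = 1; b < BLOCKS; b++)
            if (erase_count[b] < erase_count[target]) target = b;

        erase_count[target]++;   /* erase and rewrite the chosen block */
        printf("write %2d -> physical block %d\n", write, target);
    }

    /* after 10 writes the counts are 3,3,2,2 rather than 10,0,0,0 */
    return 0;
}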

28

Solid-State Drive (SSD)

A secondary storage device built from flash memory: fast, nonvolatile, and transistor-based with no mechanical parts

29

What is flash memory

Non-volatile semiconductor memory

30

What corresponds to non-volatile memory in the memory hierarchy

Secondary memory (e.g., an SSD)

31

The only memory level that does not lose its contents when power is turned off

Solid-State Drive (SSD)

32

SSD Access time

Fast access to all locations, regardless of address

33

Cache

A fast memory level between the CPU and main memory that stores frequently accessed data to reduce access time

34

Cache Hierarchies

Used in modern computers to intelligently store and manage frequently accessed data

35

Storage (secondary memory) in Mac computers

Flash-based (SSD)

36

L1 Cache (Level 1)

The fastest cache level, located closest to the CPU and often split into separate instruction and data caches

37

What removes the possibility of a structural hazard between the IF and MEM pipeline stages

The split nature of the L1 cache (separate instruction and data caches)

38

L2 Cache (Level 2)

A larger, slower cache that focuses on reducing miss rate and sits between L1 and main memory

39

Typical size of the L1 cache

Small (the smallest of the cache levels)

40

L3 Cache (Level 3)

A large cache shared by multiple CPU cores, reducing L2 miss penalties and bridging the gap to main memory

41

Direct-Mapped Cache

A cache structure in which each memory block maps to exactly one cache location based on address modulo cache size.

42

Which cache level is typically not found in home computers, but in servers instead

L3 cache

43

How big is the L3 cache

2-8 MiB

44

How many bytes is 1 MiB (Mebibyte)

2^(20) bytes
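
Worked out: 1 MiB = 2^(20) = 1,048,576 bytes, so the 2-8 MiB L3 cache from the card above holds roughly 2,097,152 to 8,388,608 bytes.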

45

Direct-Mapped Cache - Number of blocks

A power of two

46

Direct-Mapped Cache - Which part of the address is used to select a block

The low-order address bits

47

Formula for the direct-mapped block location

(Block address) modulo (#Blocks in cache)
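
A minimal C sketch of this mapping; the 8-block cache size and the block addresses are illustrative values, not from the cards:

#include <stdio.h>

/* Direct-mapped placement: cache index = (block address) modulo (#blocks). */
int main(void) {
    unsigned num_blocks = 8;                     /* a power of two           */
    unsigned block_addresses[] = {1, 5, 13, 29};

    for (int i = 0; i < 4; i++) {
        unsigned addr  = block_addresses[i];
        unsigned index = addr % num_blocks;      /* modulo mapping           */
        /* Because num_blocks is a power of two, the same index is just the
           low-order bits of the block address: */
        unsigned low_bits = addr & (num_blocks - 1);
        printf("block %2u -> cache index %u (low bits %u)\n", addr, index, low_bits);
    }
    return 0;
}

Note that blocks 5, 13, and 29 all map to index 5, which is why a direct-mapped cache tends to have the highest miss rate among the associativity options.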

48

Which associativity has the highest miss rate

Direct-mapped cache

49

Tag

The high-order bits of an address stored in a cache entry to identify which memory block the entry corresponds to

50

Valid Bit

A field that indicates whether a cache block contains valid (1) or invalid (0) data

51

Fully Associative Cache

A cache in which any memory block can be placed in any cache location

52

Set-Associative Cache

A cache where each memory block can be placed in a fixed number of locations within a set (e.g., 2-way, 4-way)

53

Eight-Way Set Associative Cache

A cache divided so each set contains eight possible locations, effectively close to fully associative for small caches

54

Cache Set

A group of cache lines where a memory block can reside in a set-associative cache, determined by index bits

55

Replacement Policy

The rule used to decide which block to evict when placing new data into a set (e.g., least recently used)
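
One simple way to realize the least-recently-used policy mentioned above is an age counter per way. This is an illustrative C sketch for a single 4-way set, with made-up tags and an invented access sequence; hardware typically approximates LRU with far fewer bits, but the counter version is the easiest to read:

#include <stdio.h>

#define WAYS 4

typedef struct {
    int      valid;
    unsigned tag;
    unsigned age;   /* higher = used longer ago */
} Line;

static void access_block(Line set[], unsigned tag) {
    int hit_way = -1;

    /* age every line and look for a matching, valid tag */
    for (int w = 0; w < WAYS; w++) {
        set[w].age++;
        if (set[w].valid && set[w].tag == tag) hit_way = w;
    }

    if (hit_way >= 0) {
        set[hit_way].age = 0;   /* hit: this way is now the most recently used */
        printf("tag %2u: hit in way %d\n", tag, hit_way);
        return;
    }

    /* miss: fill an invalid way if one exists, otherwise evict the oldest */
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid) { victim = w; break; }
        if (set[w].age > set[victim].age) victim = w;
    }
    set[victim] = (Line){ .valid = 1, .tag = tag, .age = 0 };
    printf("tag %2u: miss, placed in way %d\n", tag, victim);
}

int main(void) {
    Line set[WAYS] = {0};
    unsigned sequence[] = {3, 7, 3, 9, 11, 7, 13};   /* example access stream */
    for (int i = 0; i < 7; i++) access_block(set, sequence[i]);
    return 0;
}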

56

Replacing the Oldest Values in the Cache (LRU)

The most common replacement policy used in cache design

57

Index Field

Middle portion of an address used to select a specific cache set or block entry

58

Byte Offset

Lower address bits indicating the exact byte within a block (not used to index cache sets)

59

Address Tag Comparison

The process of comparing stored tag bits with address tag bits to determine a hit or miss in the cache
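
Putting the tag, index, and byte-offset cards together, here is a minimal C sketch of splitting an address and doing the tag comparison for a direct-mapped cache. The geometry (16-byte blocks, 64 blocks) and the example address are assumptions, not values from the cards:

#include <stdio.h>

#define BLOCK_BYTES 16u   /* 4 byte-offset bits */
#define NUM_BLOCKS  64u   /* 6 index bits       */
#define OFFSET_BITS 4u
#define INDEX_BITS  6u

typedef struct {
    int      valid;   /* valid bit: 1 = entry holds real data        */
    unsigned tag;     /* high-order address bits stored in the entry */
} CacheEntry;

int main(void) {
    CacheEntry cache[NUM_BLOCKS] = {0};

    unsigned addr   = 0x1234ABCDu;                               /* example byte address */
    unsigned offset = addr & (BLOCK_BYTES - 1);                  /* low-order bits  */
    unsigned index  = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1);  /* middle bits     */
    unsigned tag    = addr >> (OFFSET_BITS + INDEX_BITS);        /* high-order bits */

    /* hit = the indexed entry is valid AND its stored tag matches the address tag */
    int hit = cache[index].valid && cache[index].tag == tag;
    printf("tag=0x%x index=%u offset=%u -> %s\n",
           tag, index, offset, hit ? "hit" : "miss");

    /* on a miss the block is brought in and the entry is updated */
    cache[index].valid = 1;
    cache[index].tag   = tag;
    return 0;
}

In a set-associative cache the index selects a set instead of a single entry, and the tag comparison is performed against every way in that set.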

60

Specific to the L1 cache only

Split nature: composed of two independent caches (I-Cache & D-Cache)

61

I-Cache

Located at the L1 level and serves as the instruction memory

62

D-Cache

Located at the L1 level and serves as the data memory

63

Main Design Focus of Memory (the Cache in particular)

Minimize hit time and reduce miss rate
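
These two goals can be combined through the quantities defined earlier in this set using the standard average-memory-access-time relation: AMAT = hit time + miss rate × miss penalty. With assumed illustration numbers, a 1 ns hit time, a 5% miss rate, and a 100 ns miss penalty give 1 + 0.05 × 100 = 6 ns per access, which is why shrinking either the hit time or the miss rate pays off.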