CSCI 223 Final Exam Review

Last updated 2:30 PM on 11/30/23
154 Terms

1
New cards

Volatile Memory

Requires power to retain data; faster memory access, e.g. RAM

2
New cards

Non-volatile Memory (NVM)

Retains information without power; used for long-term storage, e.g. ROM, flash memory, SSD

3
New cards

2 types of RAM

SRAM and DRAM

4
New cards

SRAM

higher cost than DRAM, faster access, lower density; 4-6 transistors per bit

5
New cards

DRAM

lower cost than SRAM, slower access, higher density; 1 transistor per bit

6
New cards

What form of memory do caches use

SRAM (faster, more costly)

7
New cards

SDRAM

Synchronous DRAM uses a conventional clock signal, allows reuse of the same row addresses

8
New cards

DDR SDRAM

Double Data-Rate Synchronous DRAM uses double edge clocking which sends two bits per cycle per pin. Standard for most modern computer systems

9
New cards

Solution to CPU Memory Performance Gap

Memory Hierarchy

10
New cards

Locality: software or hardware solution?

Software

11
New cards

Locality

helps with CPU memory performance gap, programs tend to use data and instructions with addresses near or equal to those they have used recently

12
New cards

Memory Performance Gap

a problem where the CPU waits for memory to return data/instruction

13
New cards

Temporal Locality

Recently referenced items are likely to be referenced again in the near future

14
New cards

Spatial Locality

Items with nearby addresses tend to be referenced close together in time
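
The two kinds of locality can be seen in an ordinary summation loop. A minimal sketch (the array `a` and variable `total` are illustrative names, not from the cards):

```python
# Illustrative sketch of temporal and spatial locality in one loop.
a = list(range(10))

total = 0                  # 'total' is referenced every iteration: temporal locality
for i in range(len(a)):    # a[0], a[1], a[2], ...: stride-1 access, spatial locality
    total += a[i]

# total == 45 (sum of 0..9)
```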

15
New cards

What data type do registers hold

words

16
New cards

What data type do caches hold

cache lines

17
New cards

What data type do off-chip memories hold

pages

18
New cards

Registers and Caches are

on chip

19
New cards

Main Memory, Local Storage, and Remote Storage (cloud) are

off chip

20
New cards

Benefit of 3rd Cache

reduces the miss penalty

21
New cards

words are how many bytes

4 or 8

22
New cards

cache lines are how many bytes

64

23
New cards

pages are how many bytes

4 KB (typical)

24
New cards

Cache Memory

small, fast, SRAM

25
New cards

Cache memories and main memory are partitioned into

equal-size blocks called cache lines (or cache blocks)

26
New cards

3 Types of Cache Misses

Cold(compulsory), Conflict, Capacity

27
New cards

Cold Cache Miss

occur because the cache is empty at the beginning of program execution

28
New cards

Conflict Cache Miss

Two or more memory locations map to the same cache set. As a result, it may find that the cache set is already occupied by the data from one of the conflicting addresses

29
New cards

Capacity Cache Miss

Occurs when the set of active cache blocks is larger than the cache(when program needs more cache blocks than can fit in the cache)

30
New cards

Direct Mapped Cache

1 cache line per set

31
New cards

E-way set Associative Cache

E cache lines per set

32
New cards

How to determine which cache line to access in an associative/E-way cache

compare tag bits

33
New cards

how to determine what data in a cache line needs to be accessed

offset and data type (e.g. a 2-byte short at offset 0 occupies bytes 0 and 1)

34
New cards

Fully Associative Cache

All cache lines in a single set so there is no set index
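
The tag/set/offset split from the cards above can be sketched as bit slicing. A minimal example, with made-up parameters (64-byte lines, 64 sets) rather than numbers from the course:

```python
# Split an address into (tag, set index, block offset) for a cache with
# 64-byte lines and 64 sets -- illustrative parameters only.
BLOCK_BITS = 6   # 2^6 = 64-byte cache line
SET_BITS   = 6   # 2^6 = 64 sets (would be 0 bits for a fully associative cache)

def decompose(addr):
    offset = addr & ((1 << BLOCK_BITS) - 1)          # low bits: byte within line
    s      = (addr >> BLOCK_BITS) & ((1 << SET_BITS) - 1)  # middle bits: set index
    tag    = addr >> (BLOCK_BITS + SET_BITS)         # remaining high bits: tag
    return tag, s, offset

tag, s, offset = decompose(0x12345)   # -> tag 0x12, set 13, offset 5
```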

35
New cards

Which cache is separated into d-cache and i-cache

L1

36
New cards

d-cache

data cache (half of L1 cache)

37
New cards

i-cache

instruction cache(half of L1 cache)

38
New cards

Is Main memory on or off chip

off chip

39
New cards

What cache miss does a fully associative cache not have

Conflict Miss

40
New cards

The L3 Cache Is

a shared last level cache

41
New cards

L1 access time

4 cycles

42
New cards

L2 access time

11 cycles

43
New cards

L3 access time

30-40 cycles

44
New cards

L1 and L2 caches are both

8-way caches

45
New cards

L3 is what type of E-way cache

16-way

46
New cards

Which block is replaced when there are multiple victim candidates

Least Recently Used Block

47
New cards

advantages of splitting L1 cache

separates instruction and data access patterns (better locality) and lets an instruction fetch and a data access happen at the same time

48
New cards

miss rate equation

1 - (hit rate)

49
New cards

Hit Time

time it takes to deliver a line from cache to processor

50
New cards

Miss Penalty

the additional required time because of a miss

51
New cards

typical L1 hit time

1-2 clock cycles

52
New cards

typical L2 hit time

5-20 clock cycles

53
New cards

typical miss penalty

50-200 cycles for main memory

54
New cards

Average Memory Access Time Equation

AMAT = hit time + (miss rate × miss penalty)

55
New cards

99% cache hits is twice as good as 97% T/F

True
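
The AMAT formula makes this claim concrete. A quick check with illustrative numbers (1-cycle hit time, 100-cycle miss penalty; these are not from the cards):

```python
# AMAT = hit time + miss rate * miss penalty (in cycles)
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers: 1-cycle hit, 100-cycle miss penalty.
t97 = amat(1, 0.03, 100)   # 97% hits -> 4.0 cycles on average
t99 = amat(1, 0.01, 100)   # 99% hits -> 2.0 cycles on average
# t97 / t99 == 2.0: the 99% cache is twice as fast
```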

56
New cards

3 ways to optimize cache

reduce miss rate, miss penalty, and hit time

57
New cards

advantages of increasing cache block size

reduces miss rate

58
New cards

disadvantages of increasing cache block size

increases miss penalty and conflict/capacity misses if cache is small

59
New cards

advantages of larger cache

reduces capacity misses

60
New cards

disadvantages of larger cache

longer hit time, higher cost and power

61
New cards

advantage of higher associativity

reduce conflict misses

62
New cards

advantage of multilevel caches

reduces miss penalty

63
New cards

cache stride pattern

the distance between consecutive accesses, e.g. stride-1: A[0]→A[1]→A[2]→A[3]…
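
Stride matters because of cache lines: a sketch of the cold-miss rate of a stride-k scan, assuming 4-byte ints, 64-byte lines, and a cache large enough to hold everything (all parameters illustrative):

```python
# Cold-miss rate of a stride-k scan over n 4-byte ints with 64-byte cache
# lines, assuming every distinct line costs exactly one cold miss.
LINE_BYTES, INT_BYTES = 64, 4

def miss_rate(n, k):
    idxs = range(0, n, k)
    lines = {(i * INT_BYTES) // LINE_BYTES for i in idxs}  # distinct lines touched
    return len(lines) / len(idxs)           # one cold miss per distinct line

r1  = miss_rate(64, 1)    # stride-1: 4 misses / 64 accesses = 0.0625
r16 = miss_rate(64, 16)   # stride-16: 4 misses / 4 accesses  = 1.0
```

Stride-1 reuses each fetched line for 16 consecutive ints; stride-16 pays a miss on every access.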

64
New cards

key idea of writing cache friendly code

our qualitative notion of locality is quantified through our understanding of cache memories

65
New cards

Writing cache friendly code:

90/10 rule; focus on inner loops of core functions (loop unrolling); minimize misses in inner loops; repeated references to the same data are good (temporal locality); stride-1 reference patterns are good (spatial locality)
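
The stride-1 advice above usually shows up as loop ordering over 2D arrays. A sketch: in C's row-major layout, the row-by-row loop is stride-1 while the column-by-column loop strides a whole row per access (Python lists do not share C's layout, so this only illustrates the orderings):

```python
# Loop orderings for a 2D traversal. In a C row-major array, 'good' is a
# stride-1 scan and 'bad' strides by one full row per access.
N = 4
a = [[i * N + j for j in range(N)] for i in range(N)]

good = 0
for i in range(N):          # row by row: stride-1 in row-major layout
    for j in range(N):
        good += a[i][j]

bad = 0
for j in range(N):          # column by column: stride-N in row-major layout
    for i in range(N):
        bad += a[i][j]

# Same answer (good == bad == 120), very different cache behavior in C.
```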

66
New cards

90/10 rule

90% of execution time is spent on the most costly 10% of the program

67
New cards

Benefits of Virtual Memory

Makes programming much easier, uses DRAM as a cache, simplifies memory management, isolates address spaces(easier memory protection)

68
New cards

Virtual Memory

an array of N contiguous bytes that forms the address space a program sees (addresses assigned at compile/link time); the bytes live on disk and are cached in main memory

69
New cards

T/F Disk is about 10,000x slower than DRAM

True

70
New cards

Enormous page fault penalty for data movement b/w where

main memory and disk

71
New cards

Page Table

an array of page table entries(PTEs) that maps virtual pages to physical pages

72
New cards

Page Hit

physical main memory has the page that the CPU requests

73
New cards

Page Fault

Physical main memory does NOT have a page the CPU requests

74
New cards

T/F page fault causes an exception

True

75
New cards

In case of page fault what happens

a victim page is evicted, the requested page is paged in, and the offending instruction is restarted

76
New cards

Why does virtual memory work

Locality

77
New cards

Working Set

a set of active virtual pages that programs tend to access

78
New cards

if (working set size < main memory size)

good performance after compulsory misses

79
New cards

if(working set size > main memory size)

bad performance with capacity misses

80
New cards

worst case for virtual memory locality

Thrashing: performance meltdown where pages are swapped in and out continuously

81
New cards

Key idea of virtual memory

each process has its own virtual address space

82
New cards

T/F Mapping function scatters addresses through physical memory

True

83
New cards

Memory allocation

each virtual page can be mapped to any physical page

84
New cards

T/F a virtual page cannot be stored in different physical pages at different times

False

85
New cards

Mapping virtual pages to the same physical page allows for

multiple processes to access the same code

86
New cards

Steps of address translation for page hit

1: processor sends virtual address to MMU

2-3: MMU fetches PTE from page table in memory

4: MMU sends physical address to cache/memory

5: Cache/memory sends data word to processor
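
Steps 1-4 above can be sketched as a page-table lookup: split the virtual address into VPN and offset, look up the PPN, and recombine. A minimal sketch with 4 KB pages and made-up page-table contents:

```python
# Sketch of page-hit address translation: the MMU indexes the page table
# with the VPN, then combines the PPN with the page offset.
# 4 KB pages; the page-table entries are made-up example values.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: None}   # VPN -> PPN (None = not resident)

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into VPN and page offset
    ppn = page_table.get(vpn)
    if ppn is None:
        raise RuntimeError("page fault")     # valid bit is zero
    return ppn * PAGE_SIZE + offset

paddr = translate(1 * PAGE_SIZE + 0x123)     # VPN 1 -> PPN 3
```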

87
New cards

Steps of address translation for page fault

1: Processor sends virtual address to MMU

2-3: MMU fetches PTE from page table in memory

4: Valid bit is zero, so MMU triggers page fault exception

5: Handler identifies victim (and, if dirty, pages it out to disk)

6: Handler pages in new page and updates PTE in memory

7: Handler returns to original process, restarting faulting instruction

88
New cards

TLB

Translation Lookaside Buffer: a small hardware cache in the MMU that contains complete page table entries (PTEs) for a small number of pages

89
New cards

purpose of TLB

speeds up translation by eliminating a memory access for the most used pages

90
New cards

consequence of TLB

on a TLB miss, the MMU must access both the TLB and the page table in main memory
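
The hit/miss behavior of the two cards above can be sketched as a small cache checked before the page table (all contents are made-up example values):

```python
# Sketch of a TLB in front of the page table: consult the small TLB first;
# only a TLB miss costs a memory access to fetch the PTE.
page_table = {0: 7, 1: 3, 2: 9}   # VPN -> PPN, lives in main memory
tlb = {1: 3}                      # small cache of PTEs inside the MMU
memory_accesses = 0

def lookup_ppn(vpn):
    global memory_accesses
    if vpn in tlb:                # TLB hit: translation with no memory access
        return tlb[vpn]
    memory_accesses += 1          # TLB miss: fetch the PTE from memory
    tlb[vpn] = page_table[vpn]    # cache it for next time
    return tlb[vpn]

hit_ppn  = lookup_ppn(1)   # TLB hit
miss_ppn = lookup_ppn(2)   # TLB miss -> one memory access, then cached
```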

91
New cards

T/F TLB misses are very common

False

92
New cards

Programmer’s view of virtual memory

each process has its own private linear address space that cannot be corrupted by other processes

93
New cards

System view of virtual memory

uses memory efficiently by caching virtual memory pages(efficient because of locality), simplifies memory management and programming, simplifies protection by providing a convenient inter-positioning point to check permissions

94
New cards

Pipeline Speedup Equation

Pipelined Execution Time = Non-Pipelined Execution Time / number of stages
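
Plugging illustrative numbers into the equation shows why the ideal speedup equals the stage count (500 ns and 5 stages are made-up values):

```python
# Ideal pipeline speedup: with k stages and a full pipeline, execution time
# approaches the non-pipelined time divided by k.
def pipelined_time(non_pipelined_time, stages):
    return non_pipelined_time / stages

t = pipelined_time(500, 5)   # 500 ns of work over 5 stages -> 100 ns
speedup = 500 / t            # ideal speedup equals the number of stages
```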

95
New cards

maximum speedup of pipeline

number of stages

96
New cards

3 Hazard types for pipeline

Structural Hazard, Data Hazard, Control Hazard

97
New cards

Structural Hazard

a required resource does not exist or is busy

98
New cards

Data Hazard

an instruction must wait for a previous instruction to complete its data read/write

99
New cards

Control Hazard

Deciding on control-flow action depends on previous instruction

100
New cards

Forwarding(a.k.a. Bypassing)

Use result as soon as it’s computed to help with Data Hazard (requires extra hardware circuit connections)
