CSCI 223 Final Exam Review


1

Volatile Memory

Requires power to retain data; faster memory access, e.g., RAM

2

Non-volatile Memory (NVM)

Retains information without power; used for long-term storage, e.g., ROM, flash memory, SSD

3

2 Types of RAM

SRAM and DRAM

4

SRAM

More expensive than DRAM, but faster and overall better performance; 4-6 transistors per bit

5

DRAM

Cheaper than SRAM, but overall worse performance; 1 transistor per bit

6

What form of memory do caches use?

SRAM (faster, but more costly)

7

SDRAM

Synchronous DRAM: uses a conventional clock signal and allows reuse of the same row addresses

8

DDR SDRAM

Double Data-Rate Synchronous DRAM: uses double-edge clocking, which sends two bits per cycle per pin. Standard for most modern computer systems

9

Solution to the CPU-memory performance gap

Memory hierarchy

10

Locality: software or hardware solution?

Software

11

Locality

Helps with the CPU-memory performance gap: programs tend to use data and instructions with addresses near or equal to those they have used recently

12

Memory Performance Gap

The problem that the CPU must wait for memory to return data/instructions

13

Temporal Locality

Recently referenced items are likely to be referenced again in the near future

14

Spatial Locality

Items with nearby addresses tend to be referenced close together in time
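Both kinds of locality show up in even the simplest loop; a minimal Python sketch (the array `a` and its size are illustrative):

```python
# Summing an array exhibits both kinds of locality.
a = list(range(8))

total = 0
for i in range(len(a)):
    total += a[i]   # spatial locality: a[0], a[1], ... sit at adjacent addresses (stride-1)
                    # temporal locality: `total` and `i` are reused on every iteration
print(total)  # 28
```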

15

What data type do registers hold?

Words

16

What data type do caches hold?

Cache lines

17

What data type do off-chip memory formats hold?

Pages

18

Registers and caches are

On chip

19

Main memory, local storage, and remote storage (cloud) are

Off chip

20

Benefit of a 3rd-level (L3) cache

Reduces the miss penalty

21

Words are how many bytes?

4 or 8

22

Cache lines are how many bytes?

64

23

Pages are how many bytes?

4 KB (4096 bytes)

24

Cache Memory

Small, fast memory built from SRAM

25

Cache memories and main memory are partitioned into

Equal-size blocks called cache lines (or cache blocks)

26

3 Types of Cache Misses

Cold (compulsory), conflict, capacity

27

Cold Cache Miss

Occurs because the cache is empty at the beginning of program execution

28

Conflict Cache Miss

Occurs when two or more memory locations map to the same cache set; an access may find the set already occupied by data from one of the conflicting addresses

29

Capacity Cache Miss

Occurs when the set of active cache blocks is larger than the cache (the program needs more cache blocks than can fit in the cache)

30

Direct-Mapped Cache

1 cache line per set

31

E-way Set-Associative Cache

E cache lines per set

32

How to determine which cache line to access in an associative/E-way cache

Compare tag bits

33

How to determine what data in a cache line needs to be accessed

Offset and data type (a short with an offset of 0 occupies bytes 0 and 1)
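The tag / set index / block offset split behind the last two cards can be sketched for a hypothetical cache with 64-byte lines and 64 sets (both parameters are invented for illustration, not from the cards):

```python
# Split an address into tag / set index / block offset for a hypothetical
# cache with 64-byte lines (6 offset bits) and 64 sets (6 index bits).
OFFSET_BITS = 6
INDEX_BITS = 6

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)          # low 6 bits: byte within the line
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # next 6 bits: which set
    tag = addr >> (OFFSET_BITS + INDEX_BITS)          # remaining high bits: tag
    return tag, index, offset

print(split_address(0x12345))  # (18, 13, 5)
```

Two addresses with the same index bits but different tags map to the same set, which is exactly the situation that produces a conflict miss.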

34

Fully Associative Cache

All cache lines are in a single set, so there is no set index

35

Which cache is separated into d-cache and i-cache?

L1

36

d-cache

Data cache (half of the L1 cache)

37

i-cache

Instruction cache (half of the L1 cache)

38

Is main memory on or off chip?

Off chip

39

What cache miss does a fully associative cache not have?

Conflict misses

40

The L3 cache is

A shared last-level cache

41

L1 access time

4 cycles

42

L2 access time

11 cycles

43

L3 access time

30-40 cycles

44

L1 and L2 caches are both

8-way caches

45

L3 is what type of E-way cache?

16-way

46

Which block is replaced when there are multiple victim candidates?

The least recently used (LRU) block

47

Advantages of splitting the L1 cache

Helps with locality and allows data and instructions to be fetched at the same time

48

Miss rate equation

Miss rate = 1 - (hit rate)
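A quick check of that relation, counting hits and misses over a hypothetical access trace:

```python
# Miss rate and hit rate are complements: miss_rate = 1 - hit_rate.
hits, misses = 97, 3   # invented counts for illustration
total = hits + misses
hit_rate = hits / total
miss_rate = misses / total
assert abs(miss_rate - (1 - hit_rate)) < 1e-12
print(hit_rate, miss_rate)  # 0.97 0.03
```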

49

Hit Time

The time it takes to deliver a line from the cache to the processor

50

Miss Penalty

The additional time required because of a miss

51

Typical L1 hit time

1-2 clock cycles

52

Typical L2 hit time

5-20 clock cycles

53

Typical miss penalty

50-200 cycles for main memory

54

Average Memory Access Time Equation

AMAT = hit time + (miss rate × miss penalty)
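A worked AMAT example, using plausible but invented numbers (2-cycle hit, 5% miss rate, 100-cycle penalty):

```python
# AMAT = hit time + miss rate * miss penalty
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Hypothetical L1: 2-cycle hit, 5% miss rate, 100-cycle penalty to main memory
print(amat(2, 0.05, 100))  # 7.0 cycles on average
```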

55

T/F: A 99% cache hit rate is twice as good as 97%

True
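Why "twice as good" is true: with a 1-cycle hit time and 100-cycle miss penalty (illustrative numbers), the AMAT halves, because it is the miss rate (3% vs. 1%) that dominates, not the hit rate:

```python
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# 97% hits -> 3% misses; 99% hits -> 1% misses
a97 = amat(1, 0.03, 100)
a99 = amat(1, 0.01, 100)
print(a97, a99)  # 4.0 2.0 -> the 99% cache is twice as fast on average
```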

56

3 ways to optimize a cache

Reduce miss rate, miss penalty, and hit time

57

Advantage of increasing cache block size

Reduces miss rate

58

Disadvantages of increasing cache block size

Increases miss penalty, and increases conflict/capacity misses if the cache is small

59

Advantage of a larger cache

Reduces capacity misses

60

Disadvantages of a larger cache

Longer hit time; higher cost and power

61

Advantage of higher associativity

Reduces conflict misses

62

Advantage of multilevel caches

Reduces miss penalty

63

Cache stride pattern

The distance between consecutive accesses, e.g., stride-1: A[0] → A[1] → A[2] → A[3] ...
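A small sketch of what different strides touch (array size and strides are illustrative); with 64-byte lines, a large stride lands on a new cache line almost every access:

```python
# Generate the index sequence a given stride produces over n elements.
def access_pattern(n, stride):
    return list(range(0, n, stride))

print(access_pattern(8, 1))  # [0, 1, 2, 3, 4, 5, 6, 7]  stride-1: best spatial locality
print(access_pattern(8, 4))  # [0, 4]  stride-4: skips most of each cache line
```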

64

Key idea of writing cache-friendly code

Our qualitative notion of locality is quantified through our understanding of cache memories

65

Writing cache-friendly code:

90/10 rule; focus on inner loops of core functions (loop unrolling); minimize misses in inner loops; repeated references to data are good (temporal locality); stride-1 reference patterns are good (spatial locality)
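A classic illustration of these rules (array shape is invented for the example): in a row-major 2-D array, as C stores it, looping row-by-row in the inner loop is stride-1, while looping column-by-column strides by a whole row.

```python
# Row-major 2-D array flattened into one list, as C stores it.
ROWS, COLS = 4, 4
a = list(range(ROWS * COLS))

# Cache-friendly: inner loop (j) walks adjacent elements (stride-1)
row_major_order = [a[i * COLS + j] for i in range(ROWS) for j in range(COLS)]

# Cache-unfriendly: inner loop (i) jumps COLS elements each step (stride-COLS)
col_major_order = [a[i * COLS + j] for j in range(COLS) for i in range(ROWS)]

print(row_major_order[:5])  # [0, 1, 2, 3, 4]
print(col_major_order[:5])  # [0, 4, 8, 12, 1]
```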

66

90/10 rule

90% of execution time is spent in the most costly 10% of the program

67

Benefits of Virtual Memory

Makes programming much easier, uses DRAM as a cache for disk, simplifies memory management, and isolates address spaces (easier memory protection)

68

Virtual Memory

An array of N contiguous bytes (the virtual address space) presented to each program; the contents are stored on disk and cached in main memory

69

T/F: Disk is about 10,000x slower than DRAM

True

70

There is an enormous page fault penalty for data movement between where?

Main memory and disk

71

Page Table

An array of page table entries (PTEs) that maps virtual pages to physical pages

72

Page Hit

Physical main memory has the page that the CPU requests

73

Page Fault

Physical main memory does NOT have the page that the CPU requests

74

T/F: A page fault causes an exception

True

75

What happens in the case of a page fault?

A victim page is evicted, and the offending instruction is restarted

76

Why does virtual memory work?

Locality

77

Working Set

The set of active virtual pages that a program tends to access

78

If (working set size < main memory size)

Good performance after compulsory misses

79

If (working set size > main memory size)

Bad performance with capacity misses

80

Worst case for virtual memory locality

Thrashing: a performance meltdown where pages are swapped in and out continuously

81

Key idea of virtual memory

Each process has its own virtual address space

82

T/F: The mapping function scatters addresses through physical memory

True

83

Memory allocation

Each virtual page can be mapped to any physical page

84

T/F: A virtual page cannot be stored in different physical pages at different times

False

85

Mapping virtual pages to the same physical page allows for

Multiple processes to access the same code

86

Steps of address translation for a page hit

1: Processor sends the virtual address to the MMU

2-3: MMU fetches the PTE from the page table in memory

4: MMU sends the physical address to cache/memory

5: Cache/memory sends the data word to the processor
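The translation step itself can be sketched with a toy page table, assuming 4 KB pages (the table's contents are invented for illustration):

```python
# Toy address translation with 4 KB pages (12 offset bits).
PAGE_BITS = 12
page_table = {0: 5, 1: 9, 2: None}  # VPN -> PPN; None = not resident (valid bit 0)

def translate(vaddr):
    vpn = vaddr >> PAGE_BITS                 # virtual page number
    offset = vaddr & ((1 << PAGE_BITS) - 1)  # offset is unchanged by translation
    ppn = page_table.get(vpn)
    if ppn is None:
        raise RuntimeError("page fault")     # MMU raises an exception; handler pages in
    return (ppn << PAGE_BITS) | offset

print(hex(translate(0x1ABC)))  # VPN 1 -> PPN 9, so 0x9abc
```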

87

Steps of address translation for a page fault

1: Processor sends the virtual address to the MMU

2-3: MMU fetches the PTE from the page table in memory

4: The valid bit is zero, so the MMU triggers a page fault exception

5: Handler identifies a victim (and, if dirty, pages it out to disk)

6: Handler pages in the new page and updates the PTE in memory

7: Handler returns to the original process, restarting the faulting instruction

88

TLB

Translation Lookaside Buffer: a small hardware cache in the MMU that contains complete page table entries (PTEs) for a small number of pages

89

Purpose of the TLB

Speeds up translation by eliminating a memory access for the most-used pages

90

Consequence of the TLB

On a TLB miss, the MMU has to access both the TLB AND main memory

91

T/F: TLB misses are very common

False

92

Programmer's view of virtual memory

Each process has its own private linear address space that cannot be corrupted by other processes

93

System view of virtual memory

Uses memory efficiently by caching virtual memory pages (efficient because of locality), simplifies memory management and programming, and simplifies protection by providing a convenient interpositioning point to check permissions

94

Pipeline Speedup Equation

Pipelined execution time = non-pipelined execution time / number of stages (ideal case)

95

Maximum speedup of a pipeline

The number of stages
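The ideal speedup from the last two cards, with illustrative numbers (5 stages, equal stage latency, no hazards):

```python
# Ideal pipelining: with S equal stages and no hazards, execution time
# approaches (non-pipelined time / S), so speedup approaches S.
def ideal_pipelined_time(non_pipelined_time, stages):
    return non_pipelined_time / stages

t = 1000  # non-pipelined execution time, arbitrary units
s = 5     # number of pipeline stages
print(ideal_pipelined_time(t, s))      # 200.0
print(t / ideal_pipelined_time(t, s))  # 5.0 -> maximum speedup = number of stages
```

Real pipelines fall short of this bound because of the hazards described in the next cards.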

96

3 hazard types for a pipeline

Structural hazard, data hazard, control hazard

97

Structural Hazard

A required resource does not exist or is busy

98

Data Hazard

An instruction needs to wait for a previous instruction to complete its data read/write

99

Control Hazard

Deciding on a control-flow action depends on a previous instruction

100

Forwarding (a.k.a. Bypassing)

Use a result as soon as it is computed to help with data hazards (requires extra hardware circuit connections)
