computer systems

132 Terms

1
New cards

signed magnitude pros

  • simple and intuitive representation

  • easy to convert unsigned values to signed positive numbers (just add 0 as sign bit)

2
New cards

signed magnitude cons

  • sign bit makes it difficult to do arithmetic

  • 2 representations of 0, makes no sense

3
New cards

ones complement pros

addition is easy to perform, so circuits are simple

4
New cards

ones complement cons

2 representations of 0

5
New cards

twos complement pros

  • no special bits so simple circuits

  • single representation of 0

6
New cards

2 reasons why computers use binary

  1. computers use transistors which only have 2 states

  2. efficient for boolean computational logic

7
New cards

decimal to ones complement method

  1. decimal to unsigned binary

  2. extend bits if needed

  3. flip bits
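
A minimal Python sketch of this conversion (my own illustration, assuming an 8-bit width):

```python
def decimal_to_ones_complement(n, bits=8):
    """Decimal -> ones' complement bit string."""
    mag = format(abs(n), f'0{bits}b')        # steps 1-2: unsigned binary, padded to width
    if n >= 0:
        return mag
    return ''.join('1' if b == '0' else '0' for b in mag)  # step 3: flip the bits

print(decimal_to_ones_complement(25))   # 00011001
print(decimal_to_ones_complement(-25))  # 11100110
```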

8
New cards

ones complement to decimal method

  1. if MSB is 1, flip bits

  2. unsigned binary to (negative) decimal

9
New cards

decimal to twos complement method

  1. decimal to unsigned binary

  2. if decimal was negative, flip bits + add 1

10
New cards

twos complement to decimal method

  1. if MSB = 1, flip bits + add 1

  2. unsigned binary to decimal
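
Both directions as a small Python sketch (my own illustration; the modular form `(1 << bits) + n` is equivalent to flip-and-add-1):

```python
def decimal_to_twos_complement(n, bits=8):
    """Decimal -> two's complement bit string."""
    if n >= 0:
        return format(n, f'0{bits}b')
    return format((1 << bits) + n, f'0{bits}b')   # same result as flip bits + add 1

def twos_complement_to_decimal(s):
    """Two's complement bit string -> decimal."""
    value = int(s, 2)
    return value - (1 << len(s)) if s[0] == '1' else value  # MSB = 1 means negative

print(decimal_to_twos_complement(-25))          # 11100111
print(twos_complement_to_decimal('11100111'))   # -25
```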

11
New cards

maximum/minimum for unsigned binary

minimum = 00000000 = 0

maximum = 11111111 = 255

12
New cards

maximum/minimum for signed binary

minimum = 11111111 = -127

maximum = 01111111 = 127

13
New cards

maximum/minimum for ones complement

minimum = 1 0000000 = -127

maximum = 0 1111111 = 127

14
New cards

maximum/minimum for twos complement

minimum = 10000000 = -128

maximum = 01111111 = 127

15
New cards

signed magnitude addition method

  • if operands have the same sign, add the magnitudes and keep the sign bit

  • otherwise subtract the smaller magnitude from the larger and take the sign bit of the larger-magnitude operand
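
A toy Python sketch of this rule, using my own (sign, magnitude) pairs rather than real bit strings:

```python
def sm_add(a_sign, a_mag, b_sign, b_mag):
    """Signed-magnitude addition; sign bit 0 = positive, 1 = negative."""
    if a_sign == b_sign:                      # same sign: add magnitudes, keep sign bit
        return a_sign, a_mag + b_mag
    # different signs: larger magnitude minus smaller, sign of larger-magnitude operand
    if a_mag >= b_mag:
        return a_sign, a_mag - b_mag
    return b_sign, b_mag - a_mag

print(sm_add(0, 13, 1, 5))   # (0, 8):  (+13) + (-5) = +8
print(sm_add(1, 9, 0, 4))    # (1, 5):  (-9) + (+4) = -5
```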

16
New cards

signed magnitude subtraction method

  1. rewrite a − b as a + (−b) by flipping the sign bit of b

  2. if the operands have the same sign after the change, add magnitudes and keep the sign

  3. otherwise subtract the smaller magnitude from the larger, with the final sign taken from the larger-magnitude operand

17
New cards

signed multiplication/division method

perform on the magnitudes as unsigned; the result is positive if the operands have the same sign, negative otherwise

18
New cards

ones complement addition overflow

end-around carry: add the carry-out bit back into the least significant bit of the result

19
New cards

twos complement addition overflow

discard excess carry bits

  • if the carry into the sign bit is not the same as the carry out of the sign bit, overflow is detected
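
A small Python sketch of 8-bit two's-complement addition; it detects overflow with the equivalent sign-based check (same-sign operands, different-sign result) rather than comparing carry bits:

```python
def twos_add(a, b, bits=8):
    """Add two values in two's complement and report overflow."""
    raw = (a + b) & ((1 << bits) - 1)                     # discard excess carry bits
    result = raw - (1 << bits) if raw >> (bits - 1) else raw
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(twos_add(100, 50))    # (-106, True)  -- 150 does not fit in 8 bits
print(twos_add(100, -50))   # (50, False)
```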

20
New cards

ones/twos complement multiplication/division method

  1. convert both operands to be positive

  2. perform as unsigned

  3. convert the result to negative if the operands had different signs

21
New cards

IEEE-754 single precision format

32 bits

  • sign bit + 8 bit exponent + 23 bit significand

  • 127 bias

22
New cards

IEEE-754 double precision format

64 bits

  • sign bit + 11 bit exponent + 52 bit significand

  • 1023 bias

23
New cards

IEEE-754 single precision to decimal method

  1. calculate the true exponent as stored exponent − bias

  2. add the implied leading 1 to the significand (1.significand)

  3. move the binary point as per the exponent, calculate the value and apply the sign bit
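
A hand decode in Python for normal numbers, checked against the standard library; the bit pattern is just a hypothetical example:

```python
import struct

def decode_single(bits32):
    """Decode a 32-bit IEEE-754 single-precision pattern by hand (normal numbers only)."""
    sign = bits32 >> 31
    exponent = (bits32 >> 23) & 0xFF          # stored (biased) exponent
    significand = bits32 & 0x7FFFFF           # 23 fraction bits
    return (-1) ** sign * (1 + significand / 2**23) * 2 ** (exponent - 127)

pattern = 0x41C80000                                         # example pattern
print(decode_single(pattern))                                # 25.0
print(struct.unpack('>f', pattern.to_bytes(4, 'big'))[0])    # library check: 25.0
```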

24
New cards

decimal to IEEE-754 single precision method

  1. convert the magnitude to unsigned binary

  2. normalise to 1.xxx form, find the exponent and add the bias

  3. store the significand without the implied leading 1 and set the sign bit

25
New cards

0, infinity and not a number special values for IEEE-754

0: 0/1 00000000 000…

infinity: 0/1 11111111 000…

not a number: 0/1 11111111 any nonzero significand

26
New cards

IEEE-754 addition method

  1. shift the significand of the operand with the smaller exponent so both operands have the same exponent

  2. if the operands have the same sign, add the significands, else subtract

  3. renormalise if needed and pick the sign

27
New cards

IEEE-754 multiplication method

  1. add decimal exponents

  2. multiply significands

  3. renormalise and set exponent

  4. select sign

28
New cards

IEEE-754 division method

  1. subtract decimal exponents

  2. divide significands

  3. renormalise and set exponent

  4. select sign

29
New cards

overflow/underflow in IEEE-754

overflow: exponent > 128

underflow: magnitude too close to 0 to be distinguished from it

30
New cards

word

smallest addressable unit in memory

31
New cards

block

method of grouping sequential words in memory

  • number of blocks = number of words / block size

32
New cards

memory address

uniquely identifies a word

  • block field (block of word) + offset field (word position in block)
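
A small worked example in Python with assumed sizes (2^16 words of memory, 8-word blocks):

```python
words, block_size = 2**16, 8
num_blocks = words // block_size          # number of blocks = number of words / block size

address = 0b1010101010101010
offset = address % block_size             # word position within the block
block = address // block_size             # which block the word belongs to
print(num_blocks, block, offset)          # 8192 5461 2
```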

33
New cards

exposed hierarchy

programmers decide where to store data

  • complex load/store instructions for each form of memory in memory hierarchy

  • when the CPU loads data, the instruction must specify which level of the hierarchy is being accessed, since the program has full access to every level

34
New cards

implicit memory hierarchy

memory complexities hidden from processor

  • hardware handles where data goes, in fast or slow memory

    • processor sees fast memory as long as hardware keeps required data in cache

    • processor needs to handle different latencies on demand

35
New cards

cache miss

data for address is not in cache

  • request passed to main memory, high latency

  • temporal locality: data retained in cache

  • spatial/sequential locality: request additional close addresses from memory

36
New cards

EAT for hit and miss ratios

without cache, EAT = main memory access

  • if access is overlapped and cache checked in parallel with main memory, then

    • EAT = (HR x cache access) + (MR x main memory access)

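
A quick numeric example with hypothetical figures (95% hit rate, 10 ns cache access, 100 ns main memory access):

```python
hit_rate, cache_ns, memory_ns = 0.95, 10, 100
eat = hit_rate * cache_ns + (1 - hit_rate) * memory_ns   # overlapped access
print(eat)   # 14.5 ns on average, versus 100 ns with no cache
```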
37
New cards

internal structure of cache (given a request for data in a memory address)

  1. cache checks for a tag that matches memory address to see if it is in cache

  2. if tag found, cache hit so return data

  3. else cache miss

  • read data at address from memory and return data

  • select address in cache to store data

38
New cards

direct mapped cache

each block in memory is mapped to one cache block

  • multiple memory blocks can map to the same cache block

  • address has block (no of bits to address all cache blocks) + offset (no of bits to address all words in a block) + tag (left over bits)
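
A sketch of the field split with an assumed geometry (16 cache blocks, 4 words per block):

```python
cache_blocks, block_size = 16, 4

def direct_mapped_fields(address):
    """Split an address into tag / cache-block / offset for a direct-mapped cache."""
    offset = address % block_size
    memory_block = address // block_size
    cache_block = memory_block % cache_blocks      # the one slot this block may occupy
    tag = memory_block // cache_blocks             # leftover bits identify which block it is
    return tag, cache_block, offset

print(direct_mapped_fields(0b1101011010))   # address 858 -> (13, 6, 2)
```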

39
New cards

1 pro and con of direct mapped cache

  • search is fast as there is only 1 location in the cache where an address can be

  • overly rigid

40
New cards

fully associative cache

main memory blocks can be stored anywhere in cache

  • address = tag (block of main memory) + offset (position of word within that block)

41
New cards

con of fully associative cache

  • may have to search entire cache to find tag

    • if no space in cache to copy data from main memory, select victim block according to replacement policy

42
New cards

set associative cache

cache divided into contiguous equal sized segments

  • each address maps to only 1 set (main memory block number MOD number of sets)

    • but can be stored anywhere within that set

  • address is tag (main memory block in cache) + set (set in cache) + offset (word in block)
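
Set selection with an assumed geometry (8 sets, 4-word blocks):

```python
sets, block_size = 8, 4

def set_index(address):
    """A memory block maps to exactly one set: block number MOD number of sets."""
    memory_block = address // block_size
    return memory_block % sets

print(set_index(200))   # memory block 50 -> set 2 (it can go in any way within that set)
```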

43
New cards

3 replacement policies

  1. LRU (least recently used): keeps track of the last time each block in cache was accessed

  • victim block is the block unused for longest, complex to track

  2. FIFO: victim block is the block in cache the longest

  • simple, still keeps track

  3. random policy: random block selected as victim block

  • no overhead but risky
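
A toy LRU sketch in Python (a software illustration of the policy, not the hardware mechanism):

```python
from collections import OrderedDict

class LRUCache:
    """Victim is the block unused for the longest time."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # keys ordered oldest -> newest

    def access(self, tag):
        if tag in self.blocks:               # hit: mark as most recently used
            self.blocks.move_to_end(tag)
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[tag] = True
        return "miss"

cache = LRUCache(2)
print([cache.access(t) for t in ["A", "B", "A", "C", "B"]])
# ['miss', 'miss', 'hit', 'miss', 'miss'] - C evicted B, so B misses again
```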

44
New cards

5 factors to consider when picking an ISA

  • instruction length

  • operands/instruction type

  • memory organisation

  • addressing modes

  • operations available

45
New cards

3 main measures for an ISA

  1. memory space (RAM is expensive)

  2. instruction complexity

  3. number of available instructions

46
New cards

endianness

byte ordering

  • the order in which the bytes of a multi-byte value are stored in memory

47
New cards

little endian machines

LSB → MSB

  • flexible for casting

  • casting a 16-bit integer to a 32-bit integer at the same address doesn’t require any address arithmetic

48
New cards

big endian machines

MSB → LSB

  • natural

  • sign of number obvious by looking at byte at address offset 0

  • strings and integers are stored in same order
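
Byte order of a 32-bit value in both conventions, shown with Python's int.to_bytes:

```python
value = 0x12345678
print(value.to_bytes(4, 'little').hex())  # 78563412 - LSB stored at the lowest address
print(value.to_bytes(4, 'big').hex())     # 12345678 - MSB stored at the lowest address
```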

49
New cards

3 types of register architectures

  • stack architecture

  • accumulator architecture

  • general purpose register architecture

    • tradeoff is between simplicity and cost of hardware, and execution speed and ease of use

50
New cards

stack architecture

instructions and operands taken from stack

  • cannot be accessed randomly

51
New cards

stack machine

use 1 and 0 operand instructions

  • instructions operate on elements on top of stack

  • access instructions are push and pop that need 1 memory address operand

52
New cards

accumulator architecture

one operand of a binary operation is implicitly stored in the accumulator

  • the other operand is in memory, creating bus traffic

53
New cards

GPR architecture

registers used instead of memory

  • faster than ACC architecture

  • longer instructions

54
New cards

3 types of GPR architecture

  1. memory-memory: 2/3 operands in memory

  2. register-memory: at least 1 operand in register

  3. load-store: no operands in memory

  • move data to CPU, work with it, move out when done

55
New cards

immediate addressing

data is part of instruction

56
New cards

direct addressing

address of data is given in instruction

57
New cards

indirect addressing

address of address of data is given in instruction

58
New cards

index addressing

uses a register as the offset (an index register for indexed addressing, a base register for base addressing)

  • added to address in operand to find effective address of data

59
New cards

register addressing

data located in a register

60
New cards

register indirect addressing

register stores the address of the data

61
New cards

stack addressing

operand assumed to be on top of stack

62
New cards

FDE broken down into instruction level

  1. fetch instruction

  2. decode opcode

  3. calculate effective address of operand

  4. fetch operand

  5. execute instruction

  6. store result

63
New cards

theoretical speed offered by pipeline

  • each instruction is a task T, number of tasks is N

  • tp is the time per pipeline stage, K is the number of stages in the pipeline

    • without pipelining, total time is NKtp

    • with pipelining, the first task takes Ktp and each remaining task completes one stage-time later, so total time is (K + N − 1)tp

64
New cards

pipeline speed up equation

speedup S = NKtp / ((K + N − 1)tp) = NK / (K + N − 1), which approaches K for large N
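
Plugging in hypothetical numbers (K = 5 stages, N = 100 instructions, tp = 2 ns):

```python
K, N, tp = 5, 100, 2
no_pipeline = N * K * tp            # NKtp = 1000 ns
pipelined = (K + N - 1) * tp        # (K + N - 1)tp = 208 ns
print(no_pipeline / pipelined)      # speedup ~= 4.8, approaching K = 5 for large N
```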
65
New cards

3 pipeline hazards

  • resource conflicts

  • data dependencies

  • conditional branching

66
New cards

2 parts of a CPU

datapath + control unit

67
New cards

2 parts of a datapath

ALU + registers

68
New cards

purpose of control unit

  • sends signals to CPU components

  • signals are determined based on PC and status register values

69
New cards

what is a word

addressable cell of fixed size

70
New cards

how is RAM measured

length x width

  • length is number of words

  • width is how many bits in a word

71
New cards

2 methods of connecting I/O to CPU

  1. memory mapped: devices act like memory from CPU POV with its own address in memory

  2. instruction mapped: CPU has specific instructions for the device

72
New cards

what is a bus

array of wires, each simultaneously conveying 1 bit along its own line

73
New cards

2 types of buses

  1. point to point: connect 2 components

  2. multipoint: connect many components (data bus)

74
New cards

functions of data, control and address lines

data line: convey bits from one component to another

control line: choose direction of data flow + when devices can access the bus

address line: choose data source + destination location

  • one way

75
New cards

equation for cpu time to run a program

T = seconds/program = instructions/program x avg cycles/instruction x seconds/cycle
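
A worked example with hypothetical figures (2 million instructions, average 1.5 cycles/instruction, 1 GHz clock):

```python
instructions, avg_cpi, clock_hz = 2_000_000, 1.5, 1_000_000_000
cpu_time = instructions * avg_cpi * (1 / clock_hz)   # seconds/program
print(cpu_time)   # 0.003 seconds
```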

76
New cards

MARIE specs (6)

  • twos complement binary

  • von neumann (fixed word length data + instructions)

  • 16 bit word size, 16 bit instructions (4 opcode, 12 address)

  • 16 bit ALU

  • 7 registers

  • 4096 (2^12) words of word-addressable main memory

77
New cards

ACC

holds data to be processed

  • 16 bit

78
New cards

MBR

holds data just read from memory/to be written to memory next

  • 16 bit

79
New cards

IR

holds instruction just before execution

  • 16 bit

80
New cards

MAR

holds memory address of data being referenced

  • 12 bit

81
New cards

PC

holds address of next instruction to be executed

  • 12 bit

82
New cards

IN

holds data read from input device

  • 8 bit

83
New cards

OUT

holds data to be written to output device

  • 8 bits

84
New cards

which registers have the 2 special (non-bus) connections between them in MARIE

  • AC + MBR

  • ALU + AC + MBR

85
New cards

load X

loads contents of address X into AC

  • MAR ← X

  • MBR ← M[MAR]

  • AC ← MBR

86
New cards

store X

stores contents of AC at address X

  • MAR ← X, MBR ← AC

  • M[MAR] ← MBR

87
New cards

add x

adds contents of address X to AC

  • MAR ← X

  • MBR ← M[MAR]

  • AC ← AC + MBR

88
New cards

subt X

subtracts contents of address X from AC

  • MAR ← X

  • MBR ← M[MAR]

  • AC ← AC - MBR

89
New cards

input

inputs into AC

  • AC ← InREG

90
New cards

output

outputs value in AC

  • OutREG ← AC

91
New cards

skipcond

skips next instruction

  • if IR[11 - 10] = 00 and AC < 0 then

    • PC ← PC + 1

  • else if IR[11 - 10] = 01 and AC = 0 then

    • PC ← PC + 1

  • else if IR[11 - 10] = 10 and AC > 0 then

    • PC ← PC + 1

92
New cards

jump X

load X into PC

  • PC ← X

93
New cards

FDE cycle

  1. fetch from memory

  • copy address in PC into MAR (MAR ← PC)

  • copy instruction at that address into IR (IR ← M[MAR])

  • increment PC

  2. decode

  • copy the address field of IR into MAR (MAR ← IR[11-0])

  • if the instruction needs an operand, fetch it into MBR (MBR ← M[MAR])

  3. execute

94
New cards

6 step interrupt processing

  1. store register data in memory

  2. look up ISR address in interrupt table

  3. place ISR address in PC

  4. execute ISR instructions

  5. restore registers data

  6. resume FDE

95
New cards

maskable/non-maskable interrupt

maskable: can be ignored/disabled, e.g. while another interrupt is being processed

non-maskable: must be processed to keep the system stable

96
New cards

assembler process

  • create object program file from source code

  1. first pass: assembler assembles as much as possible while building a symbol table of memory references for symbols

  2. second pass: instructions are completed using the symbol table

97
New cards

AddI

MAR ← X

MBR ← M[MAR]

MAR ← MBR

MBR ← M[MAR]

AC ← AC + MBR

98
New cards

JnS

MBR ← PC

MAR ← X

M[MAR] ← MBR

MBR ← X

AC ← 1

AC ← AC + MBR

PC ← AC

99
New cards

JumpI

MAR ← X

MBR ← M[MAR]

PC ← MBR

100
New cards

clear

AC ← 0