SLR1 - Structure and function of the processor

24 Terms

1

ALU

Performs arithmetic (add, subtract, multiply, divide) and logic (AND, OR, NOT, comparisons).

2

Control Unit (CU)

Sends out control signals to coordinate the CPU and other components.

3

Program Counter (PC)

Holds the memory address of the next instruction to fetch.

4

Accumulator (ACC)

Stores the results of calculations performed by the ALU.

5

Memory Address Register (MAR)

Holds the address of the memory location being accessed.

6

Memory Data Register (MDR)

Holds the data being transferred to or from memory.

7

Current Instruction Register (CIR)

Holds the instruction currently being decoded and executed.

8

Buses

A set of parallel wires that connects two or more components of the computer, such as the CPU and main memory.

9

Data Bus

Transfers data and instructions between components; it is bi-directional.

10

Address Bus

Carries memory addresses from the CPU to memory and other components; it is uni-directional.

11

Control Bus

Transfers control signals (e.g., read, write, clock signals) and is bi-directional.
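
To tie the three bus cards together, here is a minimal sketch of a single memory read, written in Python purely for illustration (the RAM contents and the function name are invented): the CPU drives the address bus, asserts a read signal on the control bus, and the value comes back over the data bus.

```python
# Hypothetical sketch of one memory-read transaction using the three buses.
RAM = {0x10: 42, 0x11: 7}           # pretend main memory: address -> value

def memory_read(address):
    address_bus = address           # address bus: CPU -> memory (one way)
    control_bus = "READ"            # control bus: CPU asserts a read signal
    # data bus: memory -> CPU on a read (the bus itself is bi-directional)
    data_bus = RAM[address_bus] if control_bus == "READ" else None
    return data_bus

print(memory_read(0x10))            # 42
```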

12

Fetch

The address in the PC is copied into the MAR; the address is sent along the address bus; the instruction at that address is fetched from memory into the MDR; the contents of the MDR are copied into the CIR; the PC is incremented.

13

Decode

Instruction in CIR is decoded by the Control Unit.

14

Execute

The instruction is carried out: the ALU performs any arithmetic or logic required, coordinated by the Control Unit, and the result is stored in the ACC or written back to memory.
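
A toy simulator can make the three stages above, and the registers from the earlier cards, concrete. This is a sketch under invented assumptions: the (opcode, operand) instruction format and the LOAD/ADD/STORE/HALT instruction set are made up for illustration and do not come from any real processor.

```python
# Toy fetch-decode-execute loop using the registers named in the earlier cards.
# Instruction format (invented for this sketch): (opcode, operand)
memory = {
    0: ("LOAD", 10),    # ACC <- memory[10]
    1: ("ADD", 11),     # ACC <- ACC + memory[11]
    2: ("STORE", 12),   # memory[12] <- ACC
    3: ("HALT", None),
    10: 5, 11: 7, 12: 0,
}

pc, acc = 0, 0
running = True
while running:
    # --- Fetch ---
    mar = pc                 # address in the PC copied into the MAR
    mdr = memory[mar]        # instruction arrives in the MDR via the data bus
    cir = mdr                # contents of the MDR copied into the CIR
    pc += 1                  # PC incremented
    # --- Decode ---
    opcode, operand = cir    # Control Unit splits opcode from operand
    # --- Execute ---
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]   # ALU performs the addition
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])            # 12 (5 + 7)
```

In a real CPU the MAR and MDR sit between the processor and the address and data buses; here they are ordinary variables so the order of the steps in the Fetch card is easy to follow.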

15

Clock speed

The number of clock cycles the processor completes per second; more cycles per second = faster execution of instructions.
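
As a rough worked example with made-up figures (real CPUs rarely finish exactly one instruction per cycle), doubling the clock rate halves the time taken for the same number of cycles:

```python
# Made-up workload: a program that needs 6 billion clock cycles.
cycles_needed = 6_000_000_000

for clock_hz in (1.5e9, 3.0e9):              # 1.5 GHz vs 3 GHz
    print(f"{clock_hz / 1e9:.1f} GHz -> {cycles_needed / clock_hz:.1f} s")
# 1.5 GHz -> 4.0 s
# 3.0 GHz -> 2.0 s
```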

16

Number of cores

More cores = more instructions processed simultaneously.
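
A hedged software analogy (the workload and names below are invented for the demo): independent tasks handed to several worker processes can run at the same time, roughly one per core, provided the work can actually be split up.

```python
# Independent tasks spread across worker processes (roughly one per core).
from concurrent.futures import ProcessPoolExecutor

def count_to(n):
    # Stand-in CPU-bound workload.
    return sum(range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * 4                        # four independent tasks
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(count_to, jobs))  # tasks may run simultaneously
    print(len(results))                           # 4
```

The speed-up only appears if the software is written to use multiple cores; a purely sequential program gains nothing from the extra cores.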

17

Cache size

Larger/faster cache = less time spent accessing slow RAM.

18

Pipelining

Overlaps fetch, decode, and execute stages to increase CPU efficiency.
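
A minimal sketch of the overlap (the instruction names i1 to i5 are placeholders): from the third cycle onwards, one instruction is being executed while the next is decoded and the one after that is fetched.

```python
# Show which instruction occupies each pipeline stage on each clock cycle.
instructions = ["i1", "i2", "i3", "i4", "i5"]
stages = ["Fetch", "Decode", "Execute"]

for cycle in range(len(instructions) + len(stages) - 1):
    slots = []
    for offset, stage in enumerate(stages):
        i = cycle - offset
        slots.append(f"{stage}:{instructions[i] if 0 <= i < len(instructions) else '--'}")
    print(f"cycle {cycle + 1}:  " + "  ".join(slots))
```

If i3 were a branch, i4 and i5 would already be part-way through the pipeline and might have to be discarded, which is the problem described in the next card.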

19

Problem with pipelining

Branch (jump) instructions can disrupt the pipeline, because instructions already fetched after the branch may have to be discarded (the pipeline is flushed).

20

Von Neumann architecture

One memory for both instructions and data; they share the same bus.

21

Von Neumann bottleneck

Instructions and data cannot be fetched at the same time, slowing performance.

22

Harvard architecture

Separate memories and buses for instructions and data; faster access.

23

Use of Harvard architecture

Used in embedded systems like microcontrollers.

24

Modified Harvard architecture

Modern CPUs use separate caches for instructions and data but share main memory.
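
To contrast the two layouts, a small sketch with invented contents: in the Von Neumann model one shared memory (reached over one bus) serves both instructions and data, so only one can be accessed at a time, while the Harvard model's separate memories allow an instruction fetch and a data access in the same cycle.

```python
# Von Neumann: instructions and data share one memory and one bus,
# so only one of them can be fetched per memory access.
von_neumann_memory = {0: ("LOAD", 10), 10: 5}

# Harvard: separate memories (and buses) for instructions and data,
# so both can be read in the same cycle.
instruction_memory = {0: ("LOAD", 10)}
data_memory = {10: 5}

def harvard_cycle(pc, data_address):
    # The two reads use different buses, so they can happen simultaneously.
    return instruction_memory[pc], data_memory[data_address]

print(harvard_cycle(0, 10))   # (('LOAD', 10), 5)
```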