Computer Science - 1.1.1: CPU structure and function


39 Terms

1

Purpose of the CPU

Processes data (searching, sorting, calculating and decision making).

Interacts with other hardware.

Executes instructions fetched from RAM.

2

What is the fetch-decode-execute cycle? (brief overview)

The sequence of steps the CPU repeats to process each instruction:

Fetch: next instruction is fetched from the main memory

Decode: instruction is translated into opcode and operand (in the CIR)

Execute: the appropriate opcode is carried out on the operand
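
A minimal sketch of this cycle in Python (the three-instruction program, the mnemonics and the memory layout below are all invented for illustration, not taken from any particular CPU):

```python
# Toy fetch-decode-execute loop (illustrative only).
memory = [
    ("LOAD", 10),    # put the value at address 10 into the accumulator
    ("ADD", 11),     # add the value at address 11 to the accumulator
    ("HALT", None),  # stop
    None, None, None, None, None, None, None,
    5, 7,            # data held at addresses 10 and 11
]

pc = 0      # program counter
acc = 0     # accumulator

while True:
    instruction = memory[pc]       # FETCH the next instruction from main memory
    pc += 1
    opcode, operand = instruction  # DECODE into opcode and operand
    if opcode == "LOAD":           # EXECUTE the opcode on the operand
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "HALT":
        break

print(acc)  # 12
```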

3

What is inside the CPU?

Arithmetic Logic Unit (ALU): makes logical decisions and carries out arithmetic operations.

Control Unit (CU): sends out the signals that control the flow of data around the CPU and between components; decodes instructions.

Registers: very fast memory locations inside the CPU.

Each register is specialised to its function within the FDE cycle.

4

What are system buses?

Communication channels that carry signals between the CPU, main memory and I/O devices.

5

What are the different system buses?

Address bus, data bus, control bus

6

Data bus

Bidirectional.

Carries data from one place to another.

Moves data and instructions between the CPU and main memory (and I/O devices).

7

Address Bus

Carries the memory address of the data/instruction to be fetched, or the address where data will be stored.

Unidirectional: from the CPU to RAM and other devices.

8

Control Bus

Transmits control signals (e.g. memory read/write) from the control unit to other parts of the processor and the rest of the system.

Bidirectional.

9

Different registers

Program counter (PC)

Memory address register (MAR)

Memory data register (MDR)

Accumulator (ACC)

Current instruction register (CIR)
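
A compact way to picture these registers, sketched as a hypothetical Python structure (the field names are chosen for illustration):

```python
from dataclasses import dataclass

@dataclass
class Registers:
    pc: int = 0   # Program counter: address of the next instruction
    mar: int = 0  # Memory address register: address currently being accessed
    mdr: int = 0  # Memory data register: data just read from (or about to be written to) memory
    acc: int = 0  # Accumulator: working data and ALU results
    cir: int = 0  # Current instruction register: instruction being decoded/executed

regs = Registers()
regs.mar = regs.pc  # e.g. the fetch stage starts by copying the PC into the MAR
regs.pc += 1
print(regs)
```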

10

Program counter

Stores the address of the next instruction to be fetched. This value is copied to the MAR and then the PC increments.

11

Memory Address Register (MAR)

Holds the address of the instruction (or data) to be accessed in memory. For a fetch, this address comes from the PC.

12

Memory Data Register (MDR)

Holds the instruction or data that has just been read from memory. If it is an instruction, it is copied to the CIR.

13

CIR

Current instruction register. Holds the instruction while it is being decoded and executed; this keeps the instruction available in case the MDR is needed again during the execute stage.

14

Accumulator

Temporary storage for data being processed (inputs and results within the processor).

Stores the results of calculations carried out by the ALU.

Stores data that has come from the MDR/RAM.

15

Fetch Stage (in detail)

The address in the PC is copied into the MAR.

The PC increments.

The address in the MAR is placed on the address bus.

A read signal is sent along the control bus.

RAM copies the data at the location specified by the address bus onto the data bus.

The data on the data bus is passed into the MDR.

The data is copied from the MDR into the CIR.

16

Decode Stage (in detail)

The contents of the CIR are split into two parts.

The first part is the operation code (opcode), which is decoded so that the CU knows what to do.

The rest of the content is the operand: either the address of the data to be used with the operation, or the actual data to be used (if an immediate operand is used).

The operand is copied to the MAR if it is an address, or to the MDR if it is data.

17

Execute stage (in detail)

The decoded instruction is executed:

- If data is to be loaded, the address is sent to the MAR, the usual memory-read process takes place, and the value is placed in the accumulator.

- If data is to be stored, the value in the accumulator is sent to the MDR and across the data bus, to be stored at the memory address held in the MAR (received from the decoded instruction in the CIR).

For an addition/subtraction, the CIR sends the operand address to the MAR so that the value can be fetched and added to, or subtracted from, the value in the ACC.

If the instruction is a branch, the comparison takes place in the ALU and the PC changes its value accordingly.
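
Putting the fetch, decode and execute stages above together, a register-level sketch in Python (the 4-bit opcode/operand encoding, the opcode numbers and the tiny program are all assumptions made for this example):

```python
# Register-level fetch-decode-execute sketch (illustrative only).
LOAD, STORE, ADD, HALT = 1, 2, 3, 0   # made-up opcode values

memory = [0] * 16
memory[0] = (LOAD << 4) | 14    # ACC <- memory[14]
memory[1] = (ADD << 4) | 15     # ACC <- ACC + memory[15]
memory[2] = (STORE << 4) | 13   # memory[13] <- ACC
memory[3] = (HALT << 4)         # stop
memory[14], memory[15] = 5, 7   # data

pc = mar = mdr = cir = acc = 0
running = True

while running:
    # FETCH: PC -> MAR, PC increments, memory[MAR] -> MDR -> CIR
    mar = pc
    pc += 1
    mdr = memory[mar]
    cir = mdr

    # DECODE: split the CIR into opcode and operand
    opcode = cir >> 4
    operand = cir & 0x0F

    # EXECUTE: carry out the opcode on the operand
    if opcode == LOAD:                # load: address -> MAR, read -> MDR -> ACC
        mar, mdr = operand, memory[operand]
        acc = mdr
    elif opcode == ADD:               # add: fetch the value and add it to the ACC
        mar, mdr = operand, memory[operand]
        acc += mdr
    elif opcode == STORE:             # store: ACC -> MDR -> memory[MAR]
        mar, mdr = operand, acc
        memory[mar] = mdr
    elif opcode == HALT:
        running = False

print(memory[13])  # 12
```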

18

Instruction sets

All Von Neumann CPUs have an instruction set: a list of instructions that the CU understands and knows how to carry out. There are only a relatively small number of simple instructions.

19

Assembly language main commands

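As an illustration only (the exact set below is an assumption, since the original image is unavailable), a typical simplified LMC-style set of mnemonics used at this level looks something like this:

```python
# Hypothetical LMC-style mnemonics (illustrative; not the original flashcard image).
MNEMONICS = {
    "LDA": "Load the value at a memory address into the accumulator",
    "STA": "Store the accumulator's value at a memory address",
    "ADD": "Add the value at a memory address to the accumulator",
    "SUB": "Subtract the value at a memory address from the accumulator",
    "INP": "Input a value into the accumulator",
    "OUT": "Output the value in the accumulator",
    "BRA": "Branch (jump) to an address unconditionally",
    "BRZ": "Branch to an address if the accumulator is zero",
    "HLT": "Halt the program",
}

for mnemonic, meaning in MNEMONICS.items():
    print(f"{mnemonic}: {meaning}")
```
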
20

Factors Affecting the Performance of the CPU

Clock Speed

Primary and Secondary Storage

Cache

Cores

Pipelining

21

Clock speed

The number of FDE cycles that happen each second is determined by the CPU's clock chip.

The faster the clock speed, the more instructions can be executed per second.

The chip uses a vibrating crystal that maintains a constant rate.

Clock speed is measured in hertz (the number of cycles per second).
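
A quick worked example in Python (the 3 GHz figure is just an assumed value for illustration):

```python
clock_speed_hz = 3_000_000_000       # an assumed 3 GHz clock: 3 billion cycles per second
time_per_cycle = 1 / clock_speed_hz  # about 0.33 nanoseconds per cycle

# If, simplistically, one instruction completed per cycle, this clock
# would allow roughly 3 billion instructions per second.
print(f"{time_per_cycle * 1e9:.2f} ns per cycle")
```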

22

Overclocking

Increasing the clock speed of a CPU so that more instructions are processed per second.

However, CPUs get hotter the harder they work, so overclocking risks overheating (and even fire) without proper cooling, such as a fan.

23

Secondary Storage vs Primary Storage (how it affects the CPU performance)

Data held in primary storage (cache and RAM) is accessed faster because it is physically closer to the CPU. Secondary storage is further away and connected by cables, so accessing its data is much slower. Hence the CPU works more efficiently when the data it needs is held in primary storage.

24

Cache

Cache is an additional memory space, close to the CPU, that holds frequently used data and instructions.

Cache can be read at speeds close to the CPU's own speed, much faster than RAM, so it is more efficient for the CPU to work from cache.

The larger the cache, the more instructions and data can be held near the CPU and accessed quickly. However, if too much is stored there, lookups can take longer because there is more data to search through.
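
A toy model of the idea in Python, assuming a simple dictionary standing in for cache in front of a deliberately slow "RAM" lookup (the names and delay are invented):

```python
import time

cache = {}

def slow_read_from_ram(address):
    time.sleep(0.01)    # pretend a RAM access is slow
    return address * 2  # dummy data

def read(address):
    if address in cache:                 # cache hit: fast path
        return cache[address]
    value = slow_read_from_ram(address)  # cache miss: slow path
    cache[address] = value               # keep it for next time
    return value

read(42)  # miss: goes to "RAM"
read(42)  # hit: served from the cache, much faster
```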

25

Cores

Individual processing units inside the CPU, each able to carry out its own fetch-decode-execute cycle, allowing instructions to be processed concurrently.

More data is processed in the same time period.

Parallel processing: two or more cores handle separate parts of an overall task, completing several instructions of one program at the same time.

Multitasking: different cores can process instructions from different programs at the same time.
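
A small illustration of work being split across cores, using Python's multiprocessing module (the workload and the numbers are invented):

```python
from multiprocessing import Pool

def count_primes(limit):
    """Naive prime count, just to give each worker something to do."""
    found = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            found += 1
    return found

if __name__ == "__main__":
    # Four chunks of work handed to a pool of worker processes, which the
    # operating system can schedule onto separate cores at the same time.
    with Pool(processes=4) as pool:
        results = pool.map(count_primes, [20_000, 20_000, 20_000, 20_000])
    print(sum(results))
```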

26

Limitations of Cores

More cores do not necessarily mean faster execution, because some programs cannot be split across cores and only allow one instruction to be processed at a time.

27

Pipelining

Concurrent processing of multiple instructions: one instruction can be fetched while the previous one is being decoded and the one before that is being executed.

This increases the speed of execution: latency is reduced and the CPU is never idle, so it is much more efficient.
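
A back-of-the-envelope comparison, assuming a three-stage pipeline where every stage takes exactly one clock cycle (real pipelines are less tidy):

```python
def cycles_without_pipelining(n_instructions, stages=3):
    # Each instruction must pass through every stage before the next starts.
    return n_instructions * stages

def cycles_with_pipelining(n_instructions, stages=3):
    # The first instruction takes `stages` cycles to fill the pipeline;
    # after that, one instruction completes every cycle.
    return stages + (n_instructions - 1)

print(cycles_without_pipelining(100))  # 300 cycles
print(cycles_with_pipelining(100))     # 102 cycles
```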

28

Pipelining limitations

It only works if the CPU knows which instruction needs to be fetched next, so when a branch is taken the pipeline has to be flushed.

29

Where is pipelining used in a computer system?

Instruction processing: some processors allow parts of instructions to be processed without waiting for the whole instruction cycle to complete.

Pipes that pass data between programs, or between programs and peripherals, for example the | symbol in Unix, or popen() / pipe() in C.

Graphics pipelines: a separate processor renders graphics from data supplied by other processes; parts (vertices) of the image are pipelined at the same time as the custom software (shaders) that renders the display.
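
For instance, the Unix-style pipe mentioned above can be sketched in Python with subprocess (this assumes a Unix-like system where the ls and grep commands exist):

```python
import subprocess

# Roughly equivalent to the shell command:  ls | grep py
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "py"], stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()               # let ls get SIGPIPE if grep exits early
output, _ = grep.communicate()  # grep consumes ls's output as it is produced
print(output.decode())
```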

30

Von Neumann Architecture description

All data and instructions are sent over the same data bus and stored together in RAM.

A single ALU, a single CU and special-purpose registers in the CPU.

31

Where is Von Neumann architecture used?

General purpose computers

32

Harvard Architecture description

Instructions and data are sent over separate buses and stored in separate parts of memory.

33

Where is Harvard architecture used?

Specialist computers and embedded systems, because the instruction size is fixed, memory does not need to be shared between instructions and data, and the need for secondary storage is removed.

34

Von Neumann advantages

Only one data bus so less complex for the control unit to manage

Cheaper to develop as the control unit is easier to design

Memory can be used more efficiently as all memory can be used for either instructions or data.

35

Von Neumann disadvantages

This architecture can only process one instruction / item of data at a time and so cannot carry out parallel processing.

36

Harvard advantages

Quicker execution as data and instructions can be fetched in parallel.

37

Harvard disadvantages

Wasted memory is more likely

More expensive to manufacture and more difficult to design and develop.

38

Contemporary architecture

Combination of Harvard and Von Neumann Architecture.

Two separate areas of memory/cache: one for instructions and one for data.

Different sets of buses: one for instructions and one for data, so instructions and data can be accessed concurrently.

39

Array processor architecture

Single Instruction Multiple Data (SIMD): allows the same instruction to operate simultaneously on multiple data locations, using many ALUs.
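
As an analogy, vectorised operations in NumPy apply one operation across many data elements at once (assuming NumPy is installed; libraries like this typically rely on the CPU's SIMD instructions under the hood):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8])
b = np.array([10, 20, 30, 40, 50, 60, 70, 80])

# A single "add" applied across all eight element pairs at once,
# rather than a Python loop adding them one pair at a time.
print(a + b)  # [11 22 33 44 55 66 77 88]
```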