6. Pipelining & Parallelism

Description and Tags

Flashcards based on lecture material, created by Gemini 3 Pro.


27 Terms

1

What is Pipelining?

A technique where the execution of an instruction is broken into smaller subtasks (stages) that are overlapped across instructions. It increases throughput by allowing multiple instructions to be in different stages of execution simultaneously.

2

Does pipelining improve Latency or Throughput?

It improves Throughput (the amount of work done in a given time). It does not reduce the Latency (execution time) of a single instruction; in fact, latency might slightly increase due to overhead.

3

What is the theoretical speedup of a pipeline?

Ideally, the speedup equals the number of pipe stages, assuming the stages are balanced (take equal time), the pipeline is kept full, and there are no stalls.
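
The ideal-speedup claim can be checked with simple arithmetic. The numbers below (5 stages, 1000 instructions, 1 ns per stage) are illustrative assumptions, not figures from the lecture:

```python
# Illustrative speedup calculation for a perfectly balanced pipeline.
stages = 5
n_instr = 1000
stage_time = 1  # ns per stage (assumes all stages take equal time)

# Unpipelined: each instruction runs all stages sequentially.
t_unpipelined = n_instr * stages * stage_time

# Pipelined, no stalls: 'stages' cycles to fill, then 1 instruction/cycle.
t_pipelined = (stages + n_instr - 1) * stage_time

speedup = t_unpipelined / t_pipelined
# Slightly under 5x; it approaches the stage count as n_instr grows.
```

The fill cost (stages − 1 cycles) is why real speedup is always a little below the stage count, even with no hazards.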

4

What are the 5 stages of the standard MIPS pipeline?

  1. IF (Instruction Fetch)

  2. ID (Instruction Decode & Register Read)

  3. EX (Execute operation or calculate address)

  4. MEM (Access Memory operand)

  5. WB (Write Back result to register)
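
The overlap of these five stages can be sketched as a cycle-by-cycle diagram; this toy generator assumes one instruction issues per cycle with no hazards:

```python
# Sketch: build a pipeline diagram for the 5 MIPS stages, assuming
# one instruction enters the pipeline per cycle and nothing stalls.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions):
    """Row i shows which stage instruction i occupies in each clock
    cycle (None = not in the pipeline that cycle)."""
    total_cycles = n_instructions + len(STAGES) - 1
    rows = []
    for i in range(n_instructions):
        row = [None] * total_cycles
        for s, name in enumerate(STAGES):
            row[i + s] = name  # instruction i enters stage s at cycle i+s
        rows.append(row)
    return rows

diagram = pipeline_diagram(3)
```

Reading a column of the diagram shows up to five instructions in flight at once, which is exactly where the throughput gain comes from.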

5

What is a Pipeline Hazard?

A situation that prevents the next instruction from executing in the next clock cycle, causing a potential stall or conflict.

6

What are the three types of Pipeline Hazards?

  1. Structural Hazards

  2. Data Hazards

  3. Control Hazards

7

Define a Structural Hazard.

A hazard that occurs when the hardware cannot support a specific combination of instructions simultaneously (e.g., two instructions trying to access memory at the exact same time).

8

Define a Data Hazard.

A hazard that occurs when an instruction depends on the result of a previous instruction that is still in the pipeline and not yet complete (e.g., Read-After-Write dependency).

9

Define a Control Hazard.

A hazard arising from the pipelining of branches (decisions) and other instructions that change the program counter. The pipeline doesn't know which instruction to fetch next until the branch decision is resolved.

10

What is a 'Stall' or 'Bubble'?

A delay introduced into the pipeline (often effectively a 'No-Operation' or NOP) to wait for a hazard to resolve, reducing throughput.

11

What is Forwarding (Bypassing)?

A hardware solution for Data Hazards where extra circuitry passes a result from the ALU (end of the Execute stage) directly to the input of a dependent instruction, without waiting for the result to be written back to the register file.
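
A toy timing model makes the saving concrete. The stage indices and the write-back timing below are simplifying assumptions, not exact MIPS implementation details:

```python
# Toy model of a RAW (read-after-write) dependency between two
# back-to-back instructions in a 5-stage pipeline.
# 0-based stage indices: IF=0, ID=1, EX=2, MEM=3, WB=4.
EX, MEM, WB = 2, 3, 4

def raw_stalls(produce_stage, forwarding):
    """Stall cycles for a consumer issued one cycle behind the producer,
    needing the value at the start of its EX stage (illustrative model)."""
    # Cycle at whose END the value exists:
    ready = produce_stage if forwarding else WB  # no forwarding -> wait for WB
    consumer_ex = EX + 1          # consumer reaches EX one cycle later
    return max(0, (ready + 1) - consumer_ex)

alu_no_fwd = raw_stalls(EX, forwarding=False)  # ALU result, no forwarding
alu_fwd    = raw_stalls(EX, forwarding=True)   # ALU result, forwarded
```

Under these assumptions an ALU-to-ALU dependency costs 2 bubbles without forwarding and none with it, which is the point of the bypass path.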

12

What is the Load-Use Data Hazard?

A specific data hazard where an instruction needs data that is currently being loaded from memory by the previous instruction. Even with forwarding, a 1-cycle stall (bubble) is usually required because the data isn't available until the MEM stage.
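
The one unavoidable bubble falls out of the timing; the cycle numbers below are illustrative assumptions for the classic 5-stage layout:

```python
# Illustrative load-use timing: the load's data is ready only at the END
# of its MEM stage (cycle 4, 1-based), but the very next instruction's EX
# begins at cycle 4 as well. Even with a MEM->EX forwarding path, the
# consumer's EX must slip by one cycle.
LOAD_DATA_READY = 4   # end of the load's MEM stage (1-based cycle)
NEXT_EX_START   = 4   # cycle the immediately following instruction enters EX

# Forwarded data can only feed an EX that starts after it is produced.
stall_cycles = max(0, LOAD_DATA_READY - NEXT_EX_START + 1)
```
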

13

How can a compiler resolve Data Hazards?

By re-ordering instructions (Code Scheduling) to separate dependent instructions with independent code, filling the slots that would otherwise require stalls.
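
A sketch of the idea, using a hypothetical 3-address pseudo-ISA and a deliberately simple stall counter (it only flags a use on the instruction immediately after a load):

```python
# Toy model of compiler code scheduling for load-use hazards.

def load_use_stalls(program):
    """Count 1 stall whenever an instruction reads the register that a
    load wrote on the immediately preceding instruction (toy model)."""
    stalls = 0
    for prev, curr in zip(program, program[1:]):
        op, dest, *srcs = prev
        if op == "lw" and dest in curr[2:]:
            stalls += 1
    return stalls

before = [
    ("lw",  "t1", "a"),          # t1 = mem[a]
    ("add", "t2", "t1", "t0"),   # uses t1 right away -> stall
    ("lw",  "t3", "b"),          # t3 = mem[b]
    ("add", "t4", "t3", "t0"),   # uses t3 right away -> stall
]
after = [
    ("lw",  "t1", "a"),
    ("lw",  "t3", "b"),          # independent load fills the slot
    ("add", "t2", "t1", "t0"),
    ("add", "t4", "t3", "t0"),
]
```

Moving the second (independent) load between the first load and its use removes both stalls without changing the program's result.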

14

What is Branch Prediction?

A method to resolve Control Hazards where the processor guesses the outcome of a branch (e.g., 'assume branch not taken') and loads the pipeline based on that guess (Speculative Execution).

15

What happens if Branch Prediction is incorrect?

The pipeline must be flushed (cleared of the incorrect instructions), causing a stall/penalty, and the correct instructions must be fetched.
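
The cost of flushes shows up directly in CPI. The figures below (20% branches, 10% misprediction rate, 3-cycle flush penalty) are assumed for illustration:

```python
# Back-of-envelope effect of branch misprediction on CPI
# (all numbers are illustrative assumptions).
base_cpi      = 1.0   # ideal pipelined CPI
branch_freq   = 0.20  # fraction of instructions that are branches
mispredict    = 0.10  # fraction of branches predicted wrongly
flush_penalty = 3     # cycles lost per flush

effective_cpi = base_cpi + branch_freq * mispredict * flush_penalty
```

Under these assumptions the machine loses 0.06 cycles per instruction on average, i.e. effective CPI rises to 1.06.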

16

What is Delayed Decision (Delayed Branching)?

A software/compiler technique where an instruction that does not affect the branch decision is moved into the slot immediately after the branch instruction (the branch delay slot), so it executes while the branch is being resolved.

17

What is 'Deep' or 'Super' Pipelining?

Increasing the number of pipeline stages (e.g., 8 or more). This allows for faster clock speeds and higher throughput but makes the processor more vulnerable to hazards and requires more complex logic.

18

What is Superscalar Architecture?

A processor architecture that implements Instruction Level Parallelism (ILP) by having multiple execution units (pipelines) allowing it to fetch and execute more than one instruction per clock cycle.

19

What is Out-of-Order Execution?

A technique where instructions are executed as soon as their input operands are available, rather than strictly in the order they appear in the program. Results are reassembled in order at a 'graduation' or 'retire' stage.
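
A minimal dataflow sketch of the issue side of this idea (it models neither register renaming nor the in-order retire stage; instruction names and registers are made up):

```python
# Toy out-of-order issue: each "cycle", execute every pending instruction
# whose source registers have already been produced, regardless of
# program order.

def ooo_issue_order(program, initially_ready):
    """program: list of (name, dest_reg, src_regs). Returns instruction
    names in the order they become eligible to execute."""
    ready = set(initially_ready)
    pending = list(program)
    order = []
    while pending:
        issued = [ins for ins in pending if set(ins[2]) <= ready]
        if not issued:
            raise RuntimeError("deadlock: unsatisfiable dependency")
        for name, dest, _ in issued:
            order.append(name)
            ready.add(dest)
        pending = [ins for ins in pending if ins not in issued]
    return order

prog = [
    ("i1", "r1", ["r0"]),   # r1 = f(r0)
    ("i2", "r2", ["r1"]),   # depends on i1's result
    ("i3", "r3", ["r0"]),   # independent: need not wait for i2
]
order = ooo_issue_order(prog, ["r0"])
```

Here i3 issues ahead of i2 even though it comes later in program order, because its operand is ready first.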

20

What is ILP?

Instruction Level Parallelism: A measure of how many operations in a computer program can be performed simultaneously.

21

What is TLP?

Thread Level Parallelism: Parallelism achieved by splitting a program into multiple threads that run simultaneously.

22

What is SMP?

Symmetric Multiprocessing: An architecture with two or more identical processors connected to a single shared main memory.

23

What is SISD (Flynn's Taxonomy)?

Single Instruction, Single Data: A sequential computer (traditional Uniprocessor) with no parallelism in instruction or data streams.

24

What is SIMD (Flynn's Taxonomy)?

Single Instruction, Multiple Data: A single instruction is applied to multiple data streams simultaneously. Common in GPUs and vector processors (e.g., for graphics or matrix math).
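
The concept can be mirrored (not reproduced — real SIMD uses hardware vector registers and lanes) in plain Python by applying one operation across a whole vector of data at once:

```python
# Conceptual SIMD sketch: one "instruction" (elementwise add) applied to
# every lane of two vectors, instead of one scalar op per element.
# This only illustrates the programming model, not the hardware.

def simd_add(vec_a, vec_b):
    """One conceptual instruction: elementwise add across all lanes."""
    return [a + b for a, b in zip(vec_a, vec_b)]

lanes_a = [1, 2, 3, 4]
lanes_b = [10, 20, 30, 40]
result = simd_add(lanes_a, lanes_b)  # all four lanes in one conceptual op
```
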

25

What is MIMD (Flynn's Taxonomy)?

Multiple Instruction, Multiple Data: Multiple autonomous processors simultaneously executing different instructions on different data (e.g., Multi-core processors, Distributed systems).

26

What is MISD (Flynn's Taxonomy)?

Multiple Instruction, Single Data: A rare architecture where multiple instructions operate on a single data stream (sometimes used in fault-tolerant systems).

27

What are the benefits of a Vector Processor (SIMD)?

It uses a single instruction to operate on a set of registers (vectors), saving fetch/decode bandwidth and resulting in smaller code size and efficient memory use for data-parallel tasks.