Flashcards based on lecture material, created by Gemini 3 Pro.
What is Pipelining?
A technique in which the execution of an instruction is broken down into smaller subtasks (stages) whose execution is overlapped across instructions. It increases throughput by allowing multiple instructions to be in different stages of execution simultaneously.
Does pipelining improve Latency or Throughput?
It improves Throughput (the amount of work done in a given time). It does not reduce the Latency (execution time) of a single instruction; in fact, latency might slightly increase due to overhead.
What is the theoretical speedup of a pipeline?
Ideally, the speedup is equal to the number of pipe stages, assuming the pipeline is kept full and there are no stalls.
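For intuition, a worked example with assumed numbers (not from the lecture): split a 5 ns single-cycle datapath into five 1 ns stages. Once the pipeline is full, one instruction completes every 1 ns, so speedup ≈ time per instruction (unpipelined) / time per instruction (pipelined) = 5 ns / 1 ns = 5, i.e. the number of stages.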
What are the 5 stages of the standard MIPS pipeline?
IF (Instruction Fetch)
ID (Instruction Decode & Register Read)
EX (Execute operation or calculate address)
MEM (Access Memory operand)
WB (Write Back result to register)
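A sketch of how instructions overlap across these stages (each column is one clock cycle; once the pipeline is full, one instruction finishes per cycle):

```
Cycle:    1    2    3    4    5    6    7
Instr 1:  IF   ID   EX   MEM  WB
Instr 2:       IF   ID   EX   MEM  WB
Instr 3:            IF   ID   EX   MEM  WB
```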
What is a Pipeline Hazard?
A situation that prevents the next instruction from executing in the next clock cycle, causing a potential stall or conflict.
What are the three types of Pipeline Hazards?
Structural Hazards
Data Hazards
Control Hazards
Define a Structural Hazard.
A hazard that occurs when the hardware cannot support a specific combination of instructions simultaneously (e.g., two instructions trying to access memory at the exact same time).
Define a Data Hazard.
A hazard that occurs when an instruction depends on the result of a previous instruction that is still in the pipeline and not yet complete (e.g., Read-After-Write dependency).
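A minimal C sketch of a read-after-write dependency (illustrative source code; the lecture describes the hazard at the level of the compiled instructions):

```c
#include <stdio.h>

int main(void) {
    int a = 1, b = 2, c = 3;
    int x = a + b;   /* producer: computes x */
    int y = x + c;   /* consumer: reads x immediately (read-after-write) */
    /* Compiled naively, these become back-to-back dependent instructions:
     * the consumer needs x before the producer has written it back. */
    printf("y = %d\n", y);
    return 0;
}
```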
Define a Control Hazard.
A hazard arising from the pipelining of branches (decisions) and other instructions that change the program counter. The pipeline doesn't know which instruction to fetch next until the branch decision is resolved.
What is a 'Stall' or 'Bubble'?
A delay introduced into the pipeline (often effectively a 'No-Operation' or NOP) to wait for a hazard to resolve, reducing throughput.
What is Forwarding (Bypassing)?
A hardware solution for Data Hazards where extra circuitry passes the result from the ALU (Execute stage) directly to the ALU input of a dependent instruction, without waiting for the result to be written back to the register file.
What is the Load-Use Data Hazard?
A specific data hazard where an instruction needs data that is currently being loaded from memory by the previous instruction. Even with forwarding, a 1-cycle stall (bubble) is usually required because the data isn't available until the MEM stage.
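A minimal C sketch of the load-use pattern (hypothetical code): the value loaded from memory is consumed by the very next operation, which is the case where even forwarding cannot avoid a one-cycle bubble.

```c
#include <stdio.h>

int main(void) {
    int data[1] = {41};
    int *p = data;
    int v = *p;       /* load: value is not available until the MEM stage */
    int y = v + 1;    /* use: needs v in its EX stage one cycle too early */
    /* Even with forwarding from MEM to EX, the dependent add must wait
     * one cycle (a bubble) for the loaded value. */
    printf("y = %d\n", y);
    return 0;
}
```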
How can a compiler resolve Data Hazards?
By re-ordering instructions (Code Scheduling) to separate dependent instructions with independent code, filling the slots that would otherwise require stalls.
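A C-level sketch of the idea (variable names assumed for illustration; real scheduling is done on the instructions the compiler emits): issuing both loads before either use separates each load from its consumer, hiding the load-use delay.

```c
#include <stdio.h>

int x[2] = {10, 20};

int main(void) {
    int k = 5;

    /* Unscheduled order: each add uses the value loaded just before it,
     * so each pair would suffer a load-use stall. */
    int a = x[0];
    int s1 = a + k;
    int b = x[1];
    int s2 = b + k;

    /* Scheduled order: both loads are issued first; by the time each add
     * runs, its operand has already cleared the MEM stage, so no stall. */
    int a2 = x[0];
    int b2 = x[1];
    int s3 = a2 + k;
    int s4 = b2 + k;

    printf("%d %d %d %d\n", s1, s2, s3, s4);
    return 0;
}
```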
What is Branch Prediction?
A method to resolve Control Hazards where the processor guesses the outcome of a branch (e.g., 'assume branch not taken') and loads the pipeline based on that guess (Speculative Execution).
What happens if Branch Prediction is incorrect?
The pipeline must be flushed (cleared of the incorrect instructions), causing a stall/penalty, and the correct instructions must be fetched.
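A small C sketch (branch outcomes and penalty are assumed for illustration) of counting the cost of a static 'assume not taken' scheme: every taken branch is a misprediction and pays a flush penalty.

```c
#include <stdio.h>

int main(void) {
    /* 1 = branch taken, 0 = not taken (an assumed outcome sequence). */
    int taken[] = {1, 1, 1, 1, 0, 1, 1, 1, 1, 0};
    int n = sizeof taken / sizeof taken[0];
    int flush_penalty = 2;   /* assumed cycles lost per misprediction */

    int wasted = 0;
    for (int i = 0; i < n; i++) {
        /* Static prediction: always guess "not taken".
         * A taken branch is therefore a misprediction and forces a flush. */
        if (taken[i] == 1)
            wasted += flush_penalty;
    }
    printf("Mispredictions cost %d extra cycles over %d branches\n", wasted, n);
    return 0;
}
```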
What is Delayed Decision (Delayed Branching)?
A software/compiler technique where an instruction that does not affect the branch decision is moved to the 'slot' immediately after the branch instruction, so it executes while the branch is being resolved.
What is 'Deep' or 'Super' Pipelining?
Increasing the number of pipeline stages (e.g., 8 or more). This allows for faster clock speeds and higher throughput but makes the processor more vulnerable to hazards and requires more complex logic.
What is Superscalar Architecture?
A processor architecture that implements Instruction Level Parallelism (ILP) by providing multiple execution units (pipelines), allowing it to fetch and execute more than one instruction per clock cycle.
What is Out-of-Order Execution?
A technique where instructions are executed as soon as their input operands are available, rather than strictly in the order they appear in the program. Results are reassembled in order at a 'graduation' or 'retire' stage.
What is ILP?
Instruction Level Parallelism: A measure of how many operations in a computer program can be performed simultaneously.
What is TLP?
Thread Level Parallelism: Parallelism achieved by splitting a program into multiple threads that run simultaneously.
What is SMP?
Symmetric Multiprocessing: An architecture with two or more identical processors connected to a single shared main memory.
What is SISD (Flynn's Taxonomy)?
Single Instruction, Single Data: A sequential computer (traditional Uniprocessor) with no parallelism in instruction or data streams.
What is SIMD (Flynn's Taxonomy)?
Single Instruction, Multiple Data: A single instruction is applied to multiple data streams simultaneously. Common in GPUs and vector processors (e.g., for graphics or matrix math).
What is MIMD (Flynn's Taxonomy)?
Multiple Instruction, Multiple Data: Multiple autonomous processors simultaneously executing different instructions on different data (e.g., Multi-core processors, Distributed systems).
What is MISD (Flynn's Taxonomy)?
Multiple Instruction, Single Data: A rare architecture where multiple instructions operate on a single data stream (sometimes used in fault-tolerant systems).
What are the benefits of a Vector Processor (SIMD)?
It uses a single instruction to operate on a set of registers (vectors), saving fetch/decode bandwidth and resulting in smaller code size and efficient memory use for data-parallel tasks.
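A minimal SIMD sketch using x86 SSE intrinsics (one concrete realization; the lecture describes vector/SIMD hardware generically): a single vector instruction adds four floats at once instead of four scalar adds.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (x86) */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);      /* load 4 floats into one vector register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* one instruction performs 4 additions */
    _mm_storeu_ps(c, vc);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```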