Chapter 27 Instruction-Level Parallelism & Superscalar Processors


20 vocabulary flashcards covering core terms and concepts from the lecture on scalar, superscalar, and superpipelined processors, instruction-level parallelism, dependency types, issue policies, and key superscalar implementation mechanisms.


20 Terms

1

Scalar Processor

CPU that operates on one data element at a time, typically with one pipelined integer unit and one pipelined floating-point unit.

2

Superscalar Processor

Architecture that contains multiple parallel pipelined functional units so that several independent instructions can be fetched, decoded, issued, and executed simultaneously.

3

Superpipelining

Technique that divides each pipeline stage into smaller substages clocked at a higher internal rate, so that multiple substages advance within one base clock cycle and instruction throughput increases (e.g., a degree-2 superpipeline completes two substages per base cycle).

4

Instruction-Level Parallelism (ILP)

The average amount of parallel execution possible among the instructions of a program, limited by data, procedural, and resource dependencies.

5

Machine Parallelism

A measure of how well the hardware can exploit ILP; depends on the number of parallel pipelines and the sophistication/speed of dependency-checking mechanisms.

6

Operation Latency

Time from the start of an instruction until its result becomes available for use by subsequent instructions; directly affects the delay caused by dependencies.

7

True Data Dependency (Read-After-Write)

Constraint where an instruction needs data produced by a preceding instruction, preventing parallel execution until the data is ready.

8

Procedural Dependency

Dependency created by control-flow instructions (e.g., branches); following instructions cannot execute until the branch outcome is known.

9

Resource Conflict

Competition among two or more instructions for the same hardware resource (e.g., functional unit, cache port, register-file port) at the same time.

10

Storage Conflict

Class of hazards arising when multiple instructions contend for the same storage location; includes output dependencies and antidependencies.

11

Output Dependency (Write-After-Write)

Hazard occurring when two instructions write to the same register or memory location; their writes must complete in program order so that the correct final value remains.

12

Antidependency (Write-After-Read)

Hazard where a later instruction writes to a location that a previous instruction still needs to read, forcing serialization.

13

Instruction Issue Policy

Rule set that governs the order in which decoded instructions are sent to the execution units and the order in which their results are written back (e.g., in-order issue with in-order completion, in-order issue with out-of-order completion, out-of-order issue with out-of-order completion).

14

In-Order Issue / In-Order Completion

Policy where instructions are issued and their results are written back strictly in program order; stalls occur on any conflict or dependency.

15

In-Order Issue / Out-of-Order Completion

Policy that issues instructions sequentially but allows them to finish and write results whenever their execution completes, reducing some stalls.

16

Out-of-Order Issue / Out-of-Order Completion

Policy that allows instructions to be issued and completed whenever resources and dependencies permit, maximizing pipeline utilization.

17

Instruction Window

Buffer that holds decoded instructions awaiting issue; enables the processor to look ahead and choose independent instructions for out-of-order execution.

18

Register Renaming

Dynamic allocation of physical registers to eliminate storage conflicts; each new value is written to a fresh physical register, and source operands are remapped accordingly.

19

Branch Prediction

Superscalar fetch strategy that guesses the outcome of conditional branches to continue fetching multiple instructions without waiting for branch resolution.

20

Commit (Retirement)

Mechanism that finalizes instruction results and updates architectural state in correct program order after out-of-order execution has completed.