Detailed Study Guide on RISC, CISC, and Parallel Processing

AZ-PAPER 3-BOOK-3 Overview

  • Presented by: Sir Aqib Khan

  • Covers: RISC, CISC & Parallel Processing

  • Context: A-Level Computer Science, Course Code P3-9618

Definitions

  • CISC (Complex Instruction Set Computer)

    • Large instruction set with many instruction formats.

    • Supports multiple addressing modes.

    • Utilizes multi-cycle instructions, resulting in longer execution times.

    • Allows variable-length instructions.

    • More complex decoding processes.

    • Pipelining is more challenging.

    • Emphasizes hardware design.

    • Complex instructions are executed via microcode held in control memory (a microprogrammed control unit).

  • RISC (Reduced Instruction Set Computer)

    • Smaller instruction set with fewer instruction formats.

    • Supports fewer addressing modes.

    • Utilizes single-cycle instructions, resulting in faster execution times.

    • Instructions are fixed in length.

    • Simplified instruction decoding supports easier pipelining.

    • Emphasizes software design.

    • Uses simple, hardwired control units and a larger set of general-purpose registers; programs typically need more instructions, so they occupy more RAM.

Key Concepts

Processor Architectures

RISC and CISC
  1. CISC Processors

    • Complex instructions that take multiple clock cycles to execute, so individual instructions are harder to optimize for speed.

    • Example:

      • Assembly instruction: ADD A, B. This single instruction internally performs several micro-operations: fetching both operands from memory, adding them, and writing the result back.

    • Result: Shorter assembly code but higher computation load on the processor.

  2. RISC Processors

    • Focuses on simpler, optimized instructions.

    • Example sequence to add two numbers (A & B):

      • LOAD X, A (load A into register X)

      • LOAD Y, B (load B into register Y)

      • ADD X, Y (add contents in X and Y)

      • STORE Z (store the result from register X into memory location Z)

    • Each instruction completes in one clock cycle, leading to optimized performance.
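The four-step sequence above can be sketched as a toy register-machine simulation in Python. The memory locations A, B, Z, the values in them, and the registers X, Y are illustrative names, not part of any real instruction set:

```python
# Toy simulation of the RISC load/store sequence above.
# Memory locations A, B, Z and registers X, Y are illustrative.
memory = {"A": 7, "B": 5, "Z": None}
registers = {}

registers["X"] = memory["A"]                       # LOAD X, A
registers["Y"] = memory["B"]                       # LOAD Y, B
registers["X"] = registers["X"] + registers["Y"]   # ADD X, Y
memory["Z"] = registers["X"]                       # STORE Z

print(memory["Z"])  # 12
```

Note how arithmetic only ever happens between registers; memory is touched solely by the LOAD and STORE steps, which is the defining property of a load/store (RISC) architecture.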

Differences Between RISC and CISC
  • CISC Features:

    • Complex instructions, variable formats, multi-cycle operation.

    • Longer execution time, complex decoding, hardware emphasis.

    • More addressing modes, complex pipelining.

  • RISC Features:

    • Simpler instructions, fixed formats, single-cycle operation.

    • Faster execution time, easier pipelining, software emphasis.

    • Greater use of RAM for program code, hardwired control units, more general-purpose registers.

Performance Enhancements

Pipelining

  • Definition:

    • Pipelining allows concurrent processing of multiple instructions.

    • Execution is divided into 5 stages:

    1. Instruction Fetch (IF)

    2. Instruction Decode (ID)

    3. Operand Fetch (OF)

    4. Instruction Execution (IE)

    5. Write-back (WB)

  • Illustration of Pipelining with Example Instructions (A-F):

    • Each instruction spends one clock cycle in each stage, so successive instructions overlap systematically and clock cycles are used efficiently.

    • Simulation shows that completing six instructions takes 10 clock cycles with pipelining, compared to 30 cycles without, a saving of 20 cycles.
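The cycle counts quoted above follow from a simple formula: with s pipeline stages and n instructions, a full pipeline finishes in s + (n − 1) cycles, versus s × n cycles when each instruction must run alone. A quick check in Python:

```python
def pipelined_cycles(stages: int, instructions: int) -> int:
    # The first instruction takes `stages` cycles to drain through;
    # each subsequent instruction completes one cycle after its predecessor.
    return stages + (instructions - 1)

def sequential_cycles(stages: int, instructions: int) -> int:
    # Without pipelining, every instruction occupies all stages alone.
    return stages * instructions

print(pipelined_cycles(5, 6))   # 10 cycles with pipelining
print(sequential_cycles(5, 6))  # 30 cycles without
```

This matches the six-instruction example: 10 cycles pipelined versus 30 sequential.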

Interrupt Handling

  • Definition:

    • An interrupt is a signal sent to the processor, by hardware devices or software, requesting attention.

  • Handling Process:

    • The processor pauses the current program upon detecting an interrupt, saves register states, and services the interrupt.

    • In a pipelined processor, handling an interrupt requires discarding the instructions currently in the pipeline, except the one already in the write-back stage, which is allowed to complete.
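A minimal sketch of that flush policy, using the five stage names listed earlier (the instruction labels and pipeline snapshot are illustrative):

```python
# Pipeline snapshot at the moment an interrupt arrives: each entry is
# (instruction, current stage), using the five-stage names IF/ID/OF/IE/WB.
pipeline = [("F", "IF"), ("E", "ID"), ("D", "OF"), ("C", "IE"), ("B", "WB")]

# On an interrupt, only the instruction already in write-back completes;
# everything earlier in the pipeline is discarded (to be refetched later).
completed = [instr for instr, stage in pipeline if stage == "WB"]
discarded = [instr for instr, stage in pipeline if stage != "WB"]

print(completed)  # ['B']
print(discarded)  # ['F', 'E', 'D', 'C']
```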

Parallel Processing

Overview

  • Definition:

    • Parallel processing involves executing multiple processes simultaneously to improve performance.

  • Architectural Models:

    • SISD (Single Instruction Single Data):

    • One processor handles one instruction at a time.

    • No parallel processing capability; generally found in older systems.

    • SIMD (Single Instruction Multiple Data):

    • Multiple processors execute the same instruction on different data sets, exemplified in graphics processing (e.g., altering pixel brightness).

    • MISD (Multiple Instruction Single Data):

    • Multiple processors perform different instructions on a single data source; this model is rare in practice and mainly of theoretical interest.

    • MIMD (Multiple Instruction Multiple Data):

    • Multiple processors execute different instructions on different data sources.
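The SIMD idea from the brightness example can be sketched in plain Python: one operation (add a fixed brightness offset, clamped to the 8-bit maximum) applied uniformly across a data set. In real SIMD hardware the lanes execute in lockstep; here a comprehension stands in for them, and the pixel values are illustrative:

```python
# One instruction ("add 20, clamp at 255") applied to many data items,
# mimicking a SIMD brightness adjustment on a row of 8-bit pixels.
pixels = [10, 120, 200, 250]
brightened = [min(p + 20, 255) for p in pixels]

print(brightened)  # [30, 140, 220, 255]
```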

Practical Applications

  1. Programming Requirements for Parallel Processing:

    • Programs must be segmented into blocks executable in parallel.

    • Each processor must efficiently manage separate blocks of code.

  2. Examples:

    • SIMD is used in 3D graphics rendering and in bulk image processing, where the same operation is applied across large arrays of pixel data.
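The "segment the program into blocks" requirement above can be sketched with Python's standard multiprocessing pool, which hands each independent block to a separate worker process. The block size and the summing workload are illustrative, not prescriptive:

```python
from multiprocessing import Pool

def process_block(block):
    # Illustrative workload: each worker sums one independent block of data.
    return sum(block)

if __name__ == "__main__":
    data = list(range(100))
    # Segment the data into independent blocks of 25 items each...
    blocks = [data[i:i + 25] for i in range(0, len(data), 25)]
    # ...and let a pool of workers process the blocks in parallel.
    with Pool(processes=4) as pool:
        partial_sums = pool.map(process_block, blocks)
    print(sum(partial_sums))  # 4950
```

The key precondition, as the notes state, is that the blocks are independent: no block's result may depend on another block still being computed.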

Conclusion

  • Future Directions:

    • The shift towards RISC architectures highlights the ongoing trend towards simplification and optimization in processor design for enhanced performance.

    • Parallel processing remains critical for achieving scalability in advanced computing systems, reflecting the necessity for maximizing computational capabilities.