
Unit 4: Input/Output Organization

Page 1: Overview of Bus Structure

  • Single-Bus Structure: A single shared bus interconnects the major components of the computer:

    • Processor

    • Memory

    • I/O devices


Page 2: Memory-Mapped I/O

  • Definition: Portion of memory address space reserved for I/O devices.

  • Shared Address Space: Both memory and I/O can be accessed using the same addressing scheme.

  • Examples:

    • Ordinary memory-access instructions such as Load R2, DATAIN or Store R2, DATAOUT are used for I/O operations.

  • Contrasts with I/O-Mapped (Isolated) I/O: A separate address space and special instructions are used to access device ports.
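The shared-address-space idea can be sketched in Python: a single store operation reaches either ordinary memory or a device register, decided purely by the address. The DATAIN/DATAOUT addresses below are made up for illustration.

```python
# Sketch of memory-mapped I/O: one address space, with some addresses
# routed to a device instead of RAM. Addresses are illustrative only.
DATAIN, DATAOUT = 0x4000, 0x4004       # hypothetical device addresses

memory = {}                             # ordinary RAM
device = {"in": ord('k'), "out": None}  # keyboard buffer / display buffer

def store(addr, value):
    """One Store instruction: the address alone decides the target."""
    if addr == DATAOUT:
        device["out"] = value           # Store R2, DATAOUT reaches the device
    else:
        memory[addr] = value            # any other address is a RAM write

def load(addr):
    """One Load instruction over the same shared address space."""
    if addr == DATAIN:
        return device["in"]             # Load R2, DATAIN reads the keyboard
    return memory.get(addr)

store(0x1000, ord('A'))                 # plain memory write
store(DATAOUT, ord('B'))                # same mechanism; the device sees it
```

With isolated (I/O-mapped) I/O, `store` would instead refuse device addresses and a separate pair of IN/OUT operations would handle them.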


Page 3: I/O Device Interface

  • Register Functions:

    • Data Buffer: Holds data being transferred to or from the device.

    • Status Register: Reports the current state of the device (e.g., data ready).

    • Control Register: Directs the operation of the device (e.g., enabling interrupts).


Page 4: Registers in Interfaces

  • Example Registers:

    • DATAIN

    • DATAOUT

    • STATUS

    • Interrupt-request flags such as KIRQ and DIRQ (keyboard and display interrupt requests)
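A minimal sketch of such an interface, assuming illustrative bit positions for the status flags (real register layouts vary by machine):

```python
# Sketch of a keyboard/display interface register set.
# Bit positions are invented for the example.
KIN  = 0x01   # STATUS bit 0: keyboard character available
DOUT = 0x02   # STATUS bit 1: display ready for the next character

class Interface:
    def __init__(self):
        self.DATAIN = 0     # data buffer: character typed
        self.DATAOUT = 0    # data buffer: character to display
        self.STATUS = DOUT  # status register: display starts out ready
        self.CONTROL = 0    # control register: e.g. interrupt-enable bits

    def key_pressed(self, ch):
        self.DATAIN = ord(ch)
        self.STATUS |= KIN       # the device sets the status bit

    def read_char(self):
        self.STATUS &= ~KIN      # reading the buffer clears the bit
        return chr(self.DATAIN)

iface = Interface()
iface.key_pressed('x')           # device side: a key arrives
```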


Page 5: Sample I/O Program

  • Flow of Operations:

    1. Initialize memory pointer with Move #LINE, R0.

    2. Test status with TestBit #0, STATUS to check for character input.

    3. Read character and send to display after confirming readiness.

    4. Check for carriage return and handle line feed appropriately.

    5. Call a subroutine to process the input line.
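The five steps above can be sketched as a polling loop, with the keyboard and display simulated by plain Python lists (the input characters are invented for the example):

```python
# Sketch of the polling read/echo loop: read characters until a
# carriage return arrives, echoing each one to the display.
LINE = []                      # memory buffer (Move #LINE, R0)
typed = list("hi\r")           # characters "arriving" at the keyboard

def status_kin():              # TestBit #0, STATUS: character available?
    return bool(typed)

def read_char():               # Load from DATAIN
    return typed.pop(0)

display = []
def write_char(ch):            # Store to DATAOUT (display always ready here)
    display.append(ch)

while True:
    while not status_kin():    # busy-wait on the keyboard status bit
        pass
    ch = read_char()
    write_char(ch)             # echo after confirming display readiness
    if ch == '\r':             # carriage return ends the line
        write_char('\n')       # send the line feed
        break
    LINE.append(ch)
# a subroutine would now process the characters stored in LINE
```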


Page 6: Interrupts

  • Interrupt Mechanism:

    • Allows I/O devices to signal the processor that they are ready, avoiding continuous polling.

  • Advantages:

    • Frees the processor to perform useful work instead of busy-waiting on the device.


Page 7: Interrupt Example

  • Scenario: A long computation whose results are displayed every 10 seconds, with a timer signaling each interval.

  • Polling vs Interrupts: Polling wastes time; interrupts permit efficient usage of the CPU.


Page 8: Implementation of Interrupts

  • Dual Routine Approach:

    • COMPUTE and DISPLAY routines allow the processor to handle tasks efficiently, responding to interrupts only as needed.


Page 9: Interrupt Service Routine (ISR)

  • Process Flow:

    1. Complete current instruction before handling interrupt.

    2. Load the program counter with ISR address.

    3. Execute ISR and resume execution of the interrupted program.

    4. A return-from-interrupt instruction restores control to the interrupted program; the device is informed that its request has been acknowledged.
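The four steps above can be sketched with a toy fetch-execute loop (the instruction names and the interrupt's arrival time are invented for the example):

```python
# Sketch of the ISR flow: the processor finishes the current
# instruction, saves the PC, runs the ISR, then resumes the program.
program = ["i0", "i1", "i2", "i3"]
trace = []
interrupt_pending = {2: True}      # a request arrives before instruction i2

def isr():
    trace.append("ISR")            # service the device, acknowledge request

pc = 0
while pc < len(program):
    trace.append(program[pc])      # complete the current instruction first
    pc += 1                        # PC now holds the return address
    if interrupt_pending.pop(pc, False):
        saved_pc = pc              # save the return address (e.g. on a stack)
        isr()                      # PC is loaded with the ISR's address
        pc = saved_pc              # return-from-interrupt restores the PC
```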


Page 10: ISR Example Flow

  • Illustrates how control transfers to the ISR and back, with sample instructions and the order in which they execute.


Page 11: Handling Interrupt Latency

  • Definition: Delay between receiving an interrupt and executing its corresponding routine.

  • Strategies for minimizing latency:

    • Saving only the registers the ISR actually modifies.

    • Utilizing shadow registers for interrupt service.


Page 12: Interrupt Hardware

  • Operational Mechanism: A common interrupt-request line driven by open-drain (or open-collector) outputs, so simultaneous requests combine as a logical OR (wired-OR).


Page 13: Enabling/Disabling Interrupts

  • Control Mechanism: Ability to enable/disable interrupts via control bits in the processor's status register.


Page 14: Processor Status Register (PS)

  • Flags: IE flag to control interrupt servicing. Values determine whether interrupts are accepted (1) or ignored (0).
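A sketch of the IE flag's effect, with an assumed bit position in the PS register:

```python
# Sketch of the interrupt-enable flag: a request is accepted only
# when the IE bit of the processor status register is 1.
IE = 0x01          # illustrative bit position in PS

PS = 0             # interrupts initially disabled
serviced = []

def raise_interrupt(device):
    if PS & IE:                 # IE = 1: the request is accepted
        serviced.append(device)
    # IE = 0: the request is ignored (it may stay pending in hardware)

raise_interrupt("keyboard")     # ignored: IE is 0
PS |= IE                        # enable interrupts
raise_interrupt("keyboard")     # now serviced
```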


Page 15: Disabling Interrupts During ISRs

  • Execution Flow: Automatic disabling of interrupts to ensure ISR execution integrity and prevent overlapping interrupts.


Page 16: Typical Interrupt Handling Sequence

  1. Device raises an interrupt.

  2. Processor completes the current instruction and suspends the running program.

  3. Disables further interrupts.

  4. Processes ISR.

  5. Resumes the interrupted program.


Page 17: Handling Multiple Interrupts

  • Key questions:

    1. Identifying interrupting device.

    2. Accessing appropriate ISR for each device.

    3. Allowing interrupts during active ISRs.

    4. Managing simultaneous requests.


Page 18: Strategies for Handling Multiple Interrupting Devices

  1. Polling.

  2. Vectored Interrupts.

  3. Interrupt Nesting.


Page 19: Polling Technique

  • Process: ISR polls device status registers.

    • Downside: Time is wasted checking devices that did not request service.
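A sketch of that downside: every device on the list is checked in turn, whether or not it raised the request (device names and flag values are invented):

```python
# Sketch of the polling technique: the ISR scans each device's
# status register until it finds the one whose IRQ bit is set.
devices = [("keyboard", 0), ("display", 0), ("disk", 1)]  # (name, IRQ bit)
checked = []

def find_requester():
    for name, irq in devices:
        checked.append(name)      # every check costs time
        if irq:                   # IRQ bit set: this device interrupted
            return name
    return None

requester = find_requester()      # two wasted checks before the disk
```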


Page 20: Vectored Interrupts

  • Definition: The device identifies itself by sending a special code (interrupt vector) to the processor, allowing a direct branch to its ISR.

  • Table of Vectors: A dedicated memory area holds the starting addresses of the ISRs.
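A sketch of the vector-table idea, with made-up vector numbers and ISR names:

```python
# Sketch of vectored interrupts: the device supplies a code that
# indexes a table of ISR addresses, avoiding a polling scan.
def keyboard_isr(): return "keyboard serviced"
def disk_isr():     return "disk serviced"

# the interrupt-vector table, held at a fixed memory area in hardware
vector_table = {0: keyboard_isr, 1: disk_isr}

def handle(vector):
    return vector_table[vector]()   # jump straight to the right ISR

result = handle(1)                  # the disk identified itself as vector 1
```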


Page 21: Interrupt Nesting

  • Assigns priority levels to devices so that a higher-priority request can interrupt the ISR of a lower-priority device.
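A sketch of the priority test, with invented priority levels:

```python
# Sketch of interrupt nesting: a request is accepted only if its
# priority exceeds the level the processor is currently running at.
events = []

def request(priority, current_level, name):
    if priority > current_level:          # higher priority preempts
        events.append(f"preempt:{name}")
        return priority                   # processor now runs at this level
    events.append(f"deferred:{name}")     # equal or lower: must wait
    return current_level

level = request(2, 1, "disk")             # disk (level 2) preempts a level-1 ISR
level = request(1, level, "keyboard")     # keyboard (level 1) must wait
```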


Page 22-23: Managing Simultaneous Requests

  • Protocols:

    • Prioritization by polling when devices share an interrupt line.

    • Daisy chain configurations for multiple devices.
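The daisy-chain idea can be sketched as the acknowledge signal visiting devices in chain order, so the requester closest to the processor wins (device names invented):

```python
# Sketch of daisy chaining: the interrupt-acknowledge signal is
# passed device to device; the first requester along the chain
# claims it and blocks propagation to the devices behind it.
chain = [("dev_A", False), ("dev_B", True), ("dev_C", True)]

def acknowledge(chain):
    for name, requesting in chain:   # INTA propagates down the chain
        if requesting:
            return name              # closest requester claims the grant
    return None

winner = acknowledge(chain)          # dev_B wins; dev_C never sees INTA
```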


Page 24-30: Direct Memory Access (DMA)

  • DMA Concept: Transfers data blocks without continuous CPU intervention.

  • Operational Mechanics: A DMA controller manages the transfer and interrupts the CPU upon completion.
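A sketch of the DMA handoff: the CPU supplies the source, destination, and word count once, then continues with other work while the controller does the copying (addresses and data are illustrative):

```python
# Sketch of DMA: the controller, not CPU instructions, moves the
# block, then raises a completion interrupt.
memory = list(range(100))           # simulated main memory
disk_buffer = [7, 8, 9, 10]         # simulated device data

done = {"flag": False}

def dma_transfer(src, dest_addr, count):
    for i in range(count):                 # performed by the controller,
        memory[dest_addr + i] = src[i]     # not by CPU load/store cycles
    done["flag"] = True                    # completion interrupt to the CPU

dma_transfer(disk_buffer, 20, 4)    # CPU issues one request, then resumes
```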


Page 31: Bus Structure and I/O Interfaces

  • Connection: A bus consists of address, data, and control lines, through which I/O device interfaces connect to the processor and memory.


Page 32-43: Bus Operations and Arbitration

  • Synchronous vs Asynchronous Buses:

    • Synchronous: All transfers are timed by a common bus clock; simpler and typically faster.

    • Asynchronous: Transfers are coordinated by a handshake protocol, adapting to devices of different speeds.

  • Bus Arbitration: Process to manage device access to bus resources based on priority.
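A sketch of fixed-priority arbitration among simultaneous requests (master names and the priority order are illustrative):

```python
# Sketch of bus arbitration: among simultaneous bus requests, the
# arbiter grants the bus to the highest-priority master.
requests = {"dma": True, "cpu": True, "io": False}
priority = ["dma", "cpu", "io"]     # fixed priority, highest first

def arbitrate(requests, priority):
    for master in priority:         # scan in priority order
        if requests[master]:
            return master           # bus grant to the first requester found
    return None

granted = arbitrate(requests, priority)   # DMA outranks the CPU here
```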