Types of Processors

14 Terms

1
New cards

Reduced Instruction Set Computer (RISC)

  • A processor architecture that uses a small, simple set of instructions.

  • Each instruction is designed to execute in one clock cycle, allowing fast processing.

  • Relies on software to perform complex tasks by combining simple instructions, rather than on complex hardware, resulting in a simpler, faster, and more power-efficient processor.

  • Commonly used in smartphones and tablets.

2
New cards

Complex Instruction Set Computer (CISC)

  • A processor architecture that uses a large set of complex instructions, each capable of performing multiple low-level operations within a single instruction.

  • This reduces the number of instructions per program and simplifies assembly programming. However, CISC instructions take multiple clock cycles to execute, which can reduce efficiency compared to RISC.

  • Commonly used in desktop and laptop processors.
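The RISC/CISC trade-off above can be sketched in code. This is a hypothetical model, not a real instruction set: the instruction names and cycle counts are illustrative assumptions chosen to show how one complex CISC instruction corresponds to a sequence of one-cycle RISC instructions.

```python
# Hypothetical sketch: the same memory-to-memory multiply expressed as one
# CISC-style complex instruction vs. an equivalent RISC-style sequence.
# Instruction names and cycle counts are illustrative, not from a real ISA.

# CISC: one complex instruction, taking multiple clock cycles.
cisc_program = [("MULT_MEM", "addr_a", "addr_b")]

# RISC: several simple instructions, one clock cycle each.
risc_program = [
    ("LOAD",  "r1", "addr_a"),    # 1 cycle
    ("LOAD",  "r2", "addr_b"),    # 1 cycle
    ("MUL",   "r3", "r1", "r2"),  # 1 cycle
    ("STORE", "r3", "addr_a"),    # 1 cycle
]

def total_cycles(program, cycles_per_instruction):
    """Sum the assumed cycle cost of each instruction in a program."""
    return sum(cycles_per_instruction[op[0]] for op in program)

# Assumed cycle costs for this toy example only.
cisc_cycles = total_cycles(cisc_program, {"MULT_MEM": 4})
risc_cycles = total_cycles(risc_program, {"LOAD": 1, "MUL": 1, "STORE": 1})

print(len(cisc_program), cisc_cycles)  # 1 instruction, 4 cycles
print(len(risc_program), risc_cycles)  # 4 instructions, 1 cycle each
```

The point of the sketch is the shape of the trade-off: CISC spends fewer instructions but more cycles per instruction; RISC spends more instructions but each completes in one cycle.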

3
New cards

Von Neumann Architecture

  • A computer architecture model where the processor, memory, and input/output devices share a single communication bus, using a single memory space to store both data and instructions.

  • The processor fetches instructions and data sequentially from memory via the Fetch-Decode-Execute cycle.

  • This architecture simplifies design but can cause a bottleneck known as the Von Neumann bottleneck, where the processor waits for data/instructions because they share the same bus.
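The shared-memory Fetch-Decode-Execute cycle described above can be modelled in a few lines. This is a minimal toy simulation, assuming an invented three-instruction accumulator machine; the key point it shows is that instructions and data occupy the same memory and are fetched over the same path.

```python
# Minimal sketch (toy model, not real hardware): a Von Neumann machine where
# instructions and data live in ONE shared memory, accessed over one "bus".

memory = [
    ("LOAD", 5),    # address 0: load memory[5] into the accumulator
    ("ADD", 6),     # address 1: add memory[6] to the accumulator
    ("STORE", 7),   # address 2: store the accumulator to memory[7]
    ("HALT", None), # address 3: stop
    None,           # address 4: unused
    10,             # address 5: data
    32,             # address 6: data
    0,              # address 7: result goes here
]

pc, acc = 0, 0  # program counter and accumulator
while True:
    opcode, operand = memory[pc]   # FETCH: instruction comes from shared memory
    pc += 1
    if opcode == "LOAD":           # DECODE + EXECUTE
        acc = memory[operand]      # data comes from the SAME memory
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[7])  # 42
```

Because every fetch (instruction or data) goes through the same memory, each `memory[...]` access in the loop is a turn on the single shared bus: that serialisation is the Von Neumann bottleneck.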

4
New cards

Harvard Architecture

  • A computer architecture where the processor uses separate memory and buses for data and instructions.

  • This allows the processor to fetch instructions and data simultaneously, improving performance by avoiding the bottleneck seen in Von Neumann architecture.

  • Commonly used in embedded systems and microcontrollers where speed and efficiency are crucial.

  • However, having separate memory increases hardware complexity and cost.
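For contrast with the Von Neumann model, the same toy program can be laid out Harvard-style. Again an invented accumulator machine: the difference is simply that instructions and data sit in separate memories with independent addressing.

```python
# Hypothetical sketch: the same toy program on a Harvard-style machine, where
# instructions and data sit in SEPARATE memories, so an instruction fetch and
# a data access could happen in the same cycle on real hardware.

instruction_memory = [  # holds ONLY instructions
    ("LOAD", 0),
    ("ADD", 1),
    ("STORE", 2),
    ("HALT", None),
]
data_memory = [10, 32, 0]  # holds ONLY data; addressed independently

pc, acc = 0, 0
while True:
    opcode, operand = instruction_memory[pc]  # fetch from instruction memory
    pc += 1
    if opcode == "LOAD":
        acc = data_memory[operand]            # data access uses a separate bus
    elif opcode == "ADD":
        acc += data_memory[operand]
    elif opcode == "STORE":
        data_memory[operand] = acc
    elif opcode == "HALT":
        break

print(data_memory[2])  # 42
```

Note the two address spaces overlap numerically (instruction address 0 and data address 0 are different locations), which is exactly the extra hardware complexity the card mentions.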

5
New cards

Superscalar Architecture

  • Allows a processor to execute more than one instruction per clock cycle by having multiple execution units.

  • It fetches, decodes, and executes multiple instructions in parallel within a single cycle, increasing instruction throughput.

  • This improves processor performance by making better use of processor resources and reducing idle time.

  • However, performance gains depend on the ability to find independent instructions that can be processed simultaneously.
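The dependency point in the last bullet can be made concrete with a small scheduling sketch. This is an assumed, highly simplified model of a 2-wide issue stage: it pairs consecutive instructions into the same cycle only when the later one does not read a register written earlier in that cycle.

```python
# Illustrative sketch (assumed model): a 2-wide superscalar issue stage that
# packs instructions into the same clock cycle only when the next instruction
# does not read a register written by one already issued this cycle.

program = [
    ("ADD", "r1", "r2", "r3"),  # r1 = r2 + r3
    ("ADD", "r4", "r5", "r6"),  # independent of the first -> same cycle
    ("MUL", "r7", "r1", "r4"),  # reads r1 and r4 -> must wait a cycle
    ("SUB", "r8", "r2", "r5"),  # independent -> can pair with the MUL
]

def schedule(instructions, width=2):
    """Greedily pack instructions into cycles of at most `width` issues."""
    cycles, i = [], 0
    while i < len(instructions):
        issued = [instructions[i]]
        written = {instructions[i][1]}  # destination registers this cycle
        i += 1
        while i < len(instructions) and len(issued) < width:
            dest, sources = instructions[i][1], instructions[i][2:]
            if any(s in written for s in sources):
                break  # data dependency: stop issuing for this cycle
            issued.append(instructions[i])
            written.add(dest)
            i += 1
        cycles.append(issued)
    return cycles

cycles = schedule(program)
print(len(cycles))  # 2 cycles for 4 instructions, instead of 4
```

With independent instructions available, four instructions complete in two cycles; if every instruction depended on the previous one, the same hardware would fall back to one instruction per cycle, which is the caveat in the final bullet.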

6
New cards

Embedded System

  • A specialised computer designed to perform a dedicated function, often as part of a larger device.

  • Typically includes a CPU, memory, and input/output interfaces to control specific hardware efficiently.

  • Optimised for speed, reliability, and low power consumption, rather than general-purpose computing.

  • Examples include washing machines, smart thermostats, and automotive control systems.

7
New cards

CPU

  • General-purpose processor designed for sequential tasks and system control.

  • Has fewer, more complex cores.

  • Each core is optimised for high clock speed and task flexibility.

  • Ideal for running the operating system, logic-heavy tasks, and varied instruction types.

8
New cards

Graphics Processing Units (GPUs)

  • A specialised processor designed to handle graphics rendering and image processing.

  • Contains hundreds or thousands of smaller, simpler cores that work in parallel, making it highly efficient at tasks that can be split into many identical operations.

  • Also used for general-purpose computing tasks that benefit from parallelism.

  • By offloading graphics tasks from the CPU, the GPU improves overall system performance, especially in gaming and multimedia applications.

  • Optimised for SIMD (Single Instruction, Multiple Data) execution.

  • Ideal for graphics rendering, AI, and scientific simulations where the same operation is applied to large data sets.

9
New cards

Digital Signal Processors (DSPs)

  • A specialised microprocessor designed to perform fast mathematical operations on continuous data streams such as sound, images, or sensor signals.

  • It is optimised for real-time processing, using features like parallel execution, pipelining, and dedicated hardware multipliers to handle repetitive arithmetic efficiently.

  • Improves performance in systems that require constant, rapid computation by offloading these intensive tasks from the main processor.
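The "repetitive arithmetic" a DSP accelerates is typically a multiply-accumulate loop. The sketch below shows the software shape of that work with a 3-tap FIR (moving-average) filter; on a real DSP, each multiply-add per tap would run in a dedicated hardware multiplier.

```python
# Illustrative sketch of the repetitive multiply-accumulate work a DSP is
# optimised for: a 3-tap FIR (moving-average) filter over a sample stream.
# Pure Python here; a DSP runs the inner multiply-add in dedicated hardware.

def fir_filter(samples, coefficients):
    """Convolve a signal with filter coefficients (multiply-accumulate)."""
    taps = len(coefficients)
    output = []
    for i in range(len(samples) - taps + 1):
        acc = 0.0
        for j in range(taps):                    # one multiply-accumulate per tap
            acc += coefficients[j] * samples[i + j]
        output.append(acc)
    return output

# A 3-point moving average smooths a noisy step in the signal.
signal = [0, 0, 0, 9, 9, 9]
smoothed = fir_filter(signal, [1/3, 1/3, 1/3])
print(smoothed)  # approximately [0, 3, 6, 9] (floating-point rounding aside)
```

Every output sample costs the same fixed handful of multiply-adds, which is why this workload suits pipelined, real-time hardware: the processing time per sample is constant and predictable.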

10
New cards

Single Instruction, Multiple Data (SIMD)

  • A type of parallel processing where the same instruction is applied simultaneously to multiple pieces of data.

  • One control unit broadcasts a single instruction to multiple processing elements, each working on a different data value in parallel.

  • Highly efficient for data-parallel tasks such as image processing, graphics rendering, and scientific simulations.

  • It increases throughput by reducing the number of instructions needed to process large data sets.
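The broadcast idea above can be modelled as one operation applied across a vector of "lanes". This is a conceptual sketch only: Python executes the lanes one after another, whereas real SIMD hardware (e.g. SSE/AVX or NEON vector units) applies the operation to all lanes in a single step.

```python
# Conceptual sketch only: SIMD modelled as ONE instruction (an operation)
# broadcast across a vector of lanes, each lane holding different data.
# Real SIMD hardware performs all lanes in a single hardware step.

def simd_apply(operation, vector_a, vector_b):
    """Apply one operation to every pair of lanes 'at once'."""
    return [operation(a, b) for a, b in zip(vector_a, vector_b)]

# One ADD instruction; four data elements in each operand.
lanes_a = [1, 2, 3, 4]
lanes_b = [10, 20, 30, 40]
result = simd_apply(lambda a, b: a + b, lanes_a, lanes_b)
print(result)  # [11, 22, 33, 44]
```

One instruction produced four results, which is the throughput gain the card describes: the wider the vector, the fewer instructions needed to process the same data set.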

11
New cards

Multiple Instruction, Multiple Data (MIMD)

  • A type of parallel processing where different instructions are executed simultaneously on different data by multiple processors or cores.

  • Each processor operates independently, allowing tasks with varying operations to be processed concurrently.

  • Ideal for complex, mixed workloads, such as running multiple programs, simulations, or server-based tasks, improving overall throughput and system performance.
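MIMD can be contrasted with SIMD in code: here each worker runs a different instruction stream on different data. This is an assumed model using Python threads as stand-ins for independent processor cores.

```python
# Conceptual sketch (assumed model): MIMD as workers each running a DIFFERENT
# instruction stream on DIFFERENT data at the same time. Threads stand in for
# independent processor cores here.

import threading

results = {}

def sum_task(data):             # one instruction stream...
    results["sum"] = sum(data)

def count_evens_task(data):     # ...and a completely different one
    results["evens"] = sum(1 for x in data if x % 2 == 0)

# Different programs, different data, run concurrently.
t1 = threading.Thread(target=sum_task, args=([1, 2, 3, 4],))
t2 = threading.Thread(target=count_evens_task, args=([5, 6, 7, 8],))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["sum"], results["evens"])  # 10 and 2
```

Unlike the SIMD sketch, nothing forces the two workers to execute the same operation in lock-step; that independence is what makes MIMD suit mixed workloads like those in the final bullet.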

12
New cards

Distributed Computing

  • A system where a task is divided across multiple computers or nodes, which work together over a network to complete it.

  • Each node performs part of the task simultaneously, often communicating intermediate results to coordinate the overall task.

  • This improves speed, scalability, and fault tolerance, as workloads are shared and no single machine handles everything.

  • It is commonly used in cloud computing, large-scale simulations, and server farms.
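The divide-then-combine pattern described above can be sketched without any real networking. This is a simplified simulation: plain function calls stand in for nodes, and a word count stands in for the overall task (the classic map/reduce shape behind many distributed systems).

```python
# Simplified simulation, NOT real networking: a word count divided across
# several "nodes", each producing a partial result that a coordinator then
# combines -- the map/reduce pattern behind many distributed systems.

def node_count_words(text_chunk):
    """What one node does: count words in its share of the text."""
    counts = {}
    for word in text_chunk.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def combine(partial_counts):
    """Coordinator: merge the intermediate results from every node."""
    total = {}
    for counts in partial_counts:
        for word, n in counts.items():
            total[word] = total.get(word, 0) + n
    return total

chunks = ["the cat sat", "on the mat", "the end"]  # one chunk per node
partials = [node_count_words(c) for c in chunks]   # each node works at once
word_counts = combine(partials)                    # intermediate results merged

print(word_counts["the"])  # 3
```

In a real system the chunks would be sent over a network and a node could fail mid-task, which is why the coordination and fault-tolerance mentioned in the card matter as much as the raw speed-up.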

13
New cards

Multicore Systems

  • Integrates multiple processor cores onto a single chip.

  • Each core can execute instructions independently and simultaneously, enabling true parallel processing. This improves performance by allowing the system to run multiple programs or threads at the same time.

  • More efficient for multitasking and software designed to take advantage of multiple cores.

  • Performance depends on how well software can distribute tasks across the cores.
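A small sketch of the last two bullets: software asks the OS how many cores the chip exposes, then sizes its worker pool to match. Threads stand in for per-core threads here; how evenly the tasks spread across those workers determines the real-world gain.

```python
# Small sketch: discover how many logical cores the chip exposes, then give
# the worker pool one worker per core. Threads stand in for per-core threads.

import os
from concurrent.futures import ThreadPoolExecutor

core_count = os.cpu_count() or 1  # logical cores reported by the OS

def work(task_id):
    """An independent unit of work that any core could run."""
    return task_id * task_id

# One worker per core; pool.map preserves task order in its results.
with ThreadPoolExecutor(max_workers=core_count) as pool:
    results = list(pool.map(work, range(core_count)))

print(core_count, results)
```

If the program instead ran everything in a single thread, the extra cores would sit idle, which is the dependence on software noted in the final bullet.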

14
New cards

Parallel Systems

  • Uses multiple processors or cores working simultaneously to execute different parts of a task.

  • This approach breaks down complex problems into smaller sub-tasks that can be solved simultaneously, significantly reducing overall processing time.

  • Effective for applications that can be divided into independent tasks.

  • Performance improvements depend on the ability to split tasks and coordinate between processors efficiently.
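The split/solve/combine steps above can be sketched as a chunked summation. A thread pool stands in for multiple processors here (a sketch, not a true parallel speed-up in CPython); the structure is what matters: independent sub-tasks, solved side by side, then combined.

```python
# Minimal sketch: a problem broken into independent sub-tasks executed at the
# same time, then combined. A thread pool stands in for multiple processors.

from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    """One independent sub-problem: sum a single slice of the data."""
    return sum(chunk)

data = list(range(1, 1001))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]  # SPLIT

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(subtask, chunks))            # SOLVE in parallel

total = sum(partial_sums)                                     # COMBINE
print(total)  # 500500 -- same answer, but the work was divided four ways
```

The summation parallelises cleanly because the chunks are independent; a task whose steps each depend on the previous result could not be divided this way, which is the limitation noted in the final bullet.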