A comprehensive set of question-and-answer flashcards covering key concepts from Week 10 on parallel architectures, performance metrics, and concurrent programming approaches.
What is the basic idea behind parallel computing?
Using multiple computing resources (CPUs/cores/nodes) simultaneously to solve parts of a problem concurrently, rather than executing one instruction at a time on a single CPU.
In a traditional serial program, how many instructions can execute at any moment?
Only one instruction executes at any given moment.
Give four types of parallelism named in the lecture objectives.
Instruction Parallelism, Data Parallelism, Thread Parallelism/Multitasking, and Task (or Request) Parallelism.
What is strong scaling?
How performance improves when more resources (e.g., CPUs) are added to solve a problem of fixed size.
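Strong scaling is commonly quantified with Amdahl's law (not named on the card itself, but the standard model for a fixed-size problem): if a fraction p of the work parallelizes across n processors, the serial remainder caps the achievable speedup. A minimal sketch:

```python
# Amdahl's law: speedup for a FIXED-size problem (strong scaling)
# where a fraction p of the work runs in parallel on n processors.
# Illustrative sketch; the law itself is not stated on the card.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% parallel work, 8 CPUs give well under 8x,
# and even 1000 CPUs cannot exceed 1/0.05 = 20x.
print(amdahl_speedup(0.95, 8))
print(amdahl_speedup(0.95, 1000))
```

The serial fraction (1 - p) is why adding ever more CPUs to a fixed problem shows diminishing returns.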
What is weak scaling?
How performance is maintained when both the problem size and the computational resources increase proportionally.
State two physical limits that constrain ever-faster single-CPU designs.
(1) Speed of light/transmission delays, (2) Limits to transistor miniaturization.
Why can building many moderate-speed CPUs be cheaper than one very fast CPU?
Economic limitations: commodity processors are less expensive than pushing a single processor to extreme speeds.
Describe instruction-level parallelism (ILP) in one sentence.
Executing multiple instructions within one clock cycle through pipelining or superscalar hardware.
How does a Very Long Instruction Word (VLIW) architecture express parallelism?
It packs several independent operations (e.g., int, float, load, store) into one long instruction that hardware issues simultaneously.
What problem does on-chip multithreading aim to solve?
Pipeline stalls; if one thread blocks, another can use the pipeline, keeping the CPU busy.
Pentium 4’s marketing term for simultaneous multithreading was ______.
Hyper-Threading
Define a homogeneous multiprocessor on a chip.
A single silicon die that contains multiple identical CPU cores sharing on-chip caches and other resources.
What is a co-processor?
A dedicated specialised processor attached to a system to accelerate a specific task (e.g., GPU, network processor, crypto engine).
In a symmetric multiprocessor (SMP), how is memory shared?
All CPUs share a single, uniform physical memory space managed by one operating system.
Give one key difference between a multiprocessor and a multicomputer.
Multiprocessor: shared memory; Multicomputer: each CPU has its own private memory and communicates via a network.
Name three common network topologies for multicomputers mentioned in the slides.
Star, Ring, Hypercube (others acceptable: tree, grid, double-torus, complete graph).
What is cluster computing?
A tightly linked group of independent computers working together (often over a LAN/WAN) for high availability, load balancing, or high-performance computing.
How does grid computing differ from cluster computing?
It aggregates resources from multiple administrative domains into a large virtual system, often combining multiple clusters.
List three technical characteristics of cloud computing highlighted in the lecture.
Pay-per-use pricing, elastic (on-demand) capacity, self-service/programmable interface with virtualised resources.
What are the four categories in Flynn’s taxonomy?
SISD, SIMD, MISD, MIMD.
Give an example of a SIMD machine.
Vector or array processor executing one instruction on multiple data elements (e.g., GPU shader cores).
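The SIMD idea can be modeled in plain Python: one instruction is applied to every lane of a data vector in lockstep. This loop is only a sketch of the programming abstraction; real SIMD hardware performs all lanes in parallel.

```python
# Sketch of SIMD (Single Instruction, Multiple Data): one operation
# ("add") applied across all lanes of two data vectors in lockstep.
# Real vector hardware does this in parallel; this only models the idea.

def simd_add(a, b):
    """Apply a single 'add' instruction to every lane of two vectors."""
    assert len(a) == len(b), "SIMD lanes must match in width"
    return [x + y for x, y in zip(a, b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```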
Name three common performance metrics for processors.
Clock speed, MIPS (Millions of Instructions per Second), FLOPS (Floating-Point Operations per Second).
Why can MIPS be misleading as a sole performance measure?
Different instruction sets have varying complexity, so counting instructions doesn’t necessarily reflect useful work done.
What do benchmark tests measure?
The time a computer takes to complete standard application-level tasks, providing comparative performance scores.
State two hardware tactics that generally increase computer performance, per the ‘Factors affecting performance’ slide.
Increasing clock speed; widening the data bus (others include bigger caches, more processors, higher transfer rates).
In an iterative server model, what happens to additional clients while one request is being processed?
They are blocked and must wait until the server finishes the current request.
What kernel feature allows process-based concurrent servers to handle multiple clients?
The operating system schedules multiple independent processes, each with its own private address space.
Define I/O multiplexing in the context of event-based servers.
A technique (e.g., select/poll/epoll) where one thread checks multiple file descriptors to see which have pending I/O, manually interleaving flows.
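A minimal sketch of this technique using Python's standard-library `selectors` module (which wraps select/poll/epoll): one thread registers several descriptors and services whichever becomes readable. The socket pairs here stand in for two client connections.

```python
# I/O multiplexing sketch: one thread monitors multiple descriptors
# with the stdlib `selectors` module and handles whichever is ready.
import selectors
import socket

sel = selectors.DefaultSelector()

# Two connected socket pairs stand in for two client connections.
pairs = [socket.socketpair() for _ in range(2)]
for i, (server_side, _client_side) in enumerate(pairs):
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, data=i)

# The "clients" send their requests.
pairs[0][1].sendall(b"hello")
pairs[1][1].sendall(b"world")

received = {}
while len(received) < 2:
    # Block until at least one registered descriptor has pending input.
    for key, _events in sel.select(timeout=1):
        received[key.data] = key.fileobj.recv(1024)

print(received)  # {0: b'hello', 1: b'world'} (order may vary)
```

This is the event-based model from the card: a single flow of control manually interleaves the clients instead of dedicating a process or thread to each.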
How do thread-based servers combine advantages of process- and event-based models?
Threads are scheduled by the kernel like processes but share the same address space, easing data sharing while avoiding manual multiplexing.
What is a race condition?
A situation where a program’s outcome depends on the unpredictable order of concurrent events or thread scheduling.
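A sketch of the classic race-prone pattern and its lock-based fix. Whether an unsynchronized read-modify-write actually loses updates depends on thread scheduling, so only the locked version has a guaranteed outcome:

```python
# Race conditions arise when concurrent threads do an unprotected
# read-modify-write on shared state. A lock makes the update atomic,
# so the final count no longer depends on scheduling order.
import threading

counter = 0
lock = threading.Lock()

def safe_increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:           # mutual exclusion around read-modify-write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000 with the lock; without it, possibly less
```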
Define deadlock.
A state where two or more tasks are each waiting for resources held by the others, preventing any of them from progressing.
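Deadlock typically appears when two threads acquire the same two locks in opposite orders. A standard avoidance rule, sketched below, is a global lock ordering: every thread acquires lock_a before lock_b, so no cycle of waiting can form.

```python
# Deadlock-avoidance sketch: both threads honor one global lock order
# (lock_a before lock_b), so a circular wait is impossible and the
# program always terminates.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name: str) -> None:
    with lock_a:        # every thread takes lock_a first...
        with lock_b:    # ...then lock_b, so no cycle can form
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(results))  # ['t1', 't2']
```

Had one worker acquired lock_b first, each thread could end up holding one lock while waiting for the other, which is exactly the circular-wait state the card defines.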
What is livelock or starvation?
A scenario where tasks remain active but cannot make progress because scheduling or repeated conflicts continually postpone them.
Name two basic mechanisms for achieving mutual exclusion and synchronization mentioned in the slides.
Locks and monitors (other acceptable: sleep/wait primitives).
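Both mechanisms can be sketched together as a monitor-style bounded buffer: Python's `threading.Condition` bundles a lock with wait/notify, so threads sleep until the state they need holds. The buffer class and sizes below are illustrative, not from the slides.

```python
# Monitor sketch: a Condition (lock + wait/notify) guards a shared
# queue; producers sleep when it is full, consumers when it is empty.
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.items = deque()
        self.cond = threading.Condition()  # one lock + wait/notify

    def put(self, item) -> None:
        with self.cond:
            while len(self.items) >= self.capacity:
                self.cond.wait()           # sleep until a slot frees up
            self.items.append(item)
            self.cond.notify_all()

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()           # sleep until an item arrives
            item = self.items.popleft()
            self.cond.notify_all()
            return item

buf = BoundedBuffer(capacity=2)
consumed = []

def consumer() -> None:
    for _ in range(3):
        consumed.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    buf.put(i)      # the third put waits until the consumer frees a slot
t.join()

print(consumed)  # [0, 1, 2]
```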
What large scientific project illustrates the need for exascale parallel processing in the lecture?
The Square Kilometre Array (SKA) radio telescope, producing exabyte-scale daily data.
Why does the SKA require a high-performance interconnection network?
Because its geographically distributed sensors and data centers must move hundreds of terabits per second among processing tiers.
Complete the sentence: ‘When CPUs/nodes are close together, the system is called ______ coupled; when remote, ______ coupled.’
tightly; loosely