Machine instruction level
High-level language statement level
Unit level
Program level
Concurrency can occur at four levels:
language issues in instruction- and program-level concurrency
Because there are no —, they are not addressed here
input and output operations
Multiprocessor Architectures
Late 1950s - one general-purpose processor and one or more special-purpose processors for —
program-level concurrency
Multiprocessor Architectures
Early 1960s - multiple complete processors, used for —
instruction-level concurrency
Multiprocessor Architectures
Mid-1960s - multiple partial processors, used for —
Single-Instruction Multiple-Data
Multiple-Instruction Multiple-Data
Multiprocessor Architectures
SIMD machines
MIMD machines
Physical concurrency
Categories of Concurrency:
Multiple independent processors (multiple threads of control)
Logical concurrency
Categories of Concurrency:
The appearance of physical concurrency is presented by timesharing one processor (software can be designed as if there were multiple threads of control)
thread of control
— in a program is the sequence of program points reached as control flows through the program
physical concurrency
concurrent execution
Motivations for the Use of Concurrency
Multiprocessor computers capable of — are now widely used
Even if a machine has just one processor, a program written to use — can be faster than the same program written for nonconcurrent execution
designing software
locally or over a network
Motivations for the Use of Concurrency
Involves a different way of — that can be very useful; many real-world situations involve concurrency
Many program applications are now spread over multiple machines, either —
task or process or thread
Introduction to Subprogram-Level Concurrency
A — is a program unit that can be in concurrent execution with other program units
they usually work together
implicitly started
suspended
not return to the caller
Introduction to Subprogram-Level Concurrency
Tasks differ from ordinary subprograms in that:
A task may be —
When a program unit starts the execution of a task, it is not necessarily —
When a task’s execution is completed, control may —
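A quick Java sketch (illustrative, not from the deck) of those differences: starting a task does not suspend the program unit that starts it, and the task's completion does not return control to the caller.

```java
public class TaskStartDemo {
    public static void main(String[] args) {
        // Unlike calling a subprogram, starting a task does not suspend main,
        // and when the task finishes, control does not return to main.
        Thread task = new Thread(() -> System.out.println("task runs concurrently"));
        task.start();                                 // returns immediately
        System.out.println("main keeps executing");  // may print before or after the task's line
    }
}
```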
Heavyweight tasks
Two General Categories of Tasks
— execute in their own address space
Lightweight tasks
Two General Categories of Tasks
— all run in the same address space – more efficient
disjoint
A task is — if it does not communicate with or affect the execution of any other task in the program in any way
Task Synchronization
A mechanism that controls the order in which tasks execute
Cooperation synchronization
Competition synchronization
Two kinds of task synchronization
Cooperation
producer-consumer problem
Kinds of synchronization
—: Task A must wait for task B to complete some specific activity before task A can continue its execution
e.g., the —
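As an illustration of cooperation synchronization, here is a minimal Java producer-consumer sketch (class and buffer names are illustrative): the consumer must wait until the producer has deposited a value, and the bounded buffer also makes the producer wait when the buffer is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(3); // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) buffer.put(i);   // blocks while the buffer is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) System.out.println(buffer.take()); // blocks while empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}
```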
Competition
shared counter
mutually exclusive access
Kinds of synchronization
—: Two or more tasks must use some resource that cannot be simultaneously used
e.g., a —
is usually provided by —
Depending on order, there could be four different results
Need for Competition Synchronization:
Task A: TOTAL = TOTAL + 1
Task B: TOTAL = 2 * TOTAL
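To make the four results concrete: if TOTAL starts at 3, running A entirely before B yields 8, B before A yields 7, and the two lost-update interleavings yield 4 (A's write overwrites B's) or 6 (B's write overwrites A's). A minimal Java sketch of the unsynchronized race (names are illustrative):

```java
public class RaceDemo {
    static int total = 3;   // shared variable, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> total = total + 1);  // Task A
        Thread b = new Thread(() -> total = 2 * total);  // Task B
        a.start(); b.start();
        a.join();  b.join();
        // Possible results: 8 (A then B), 7 (B then A),
        // 4 (A's write overwrites B's), 6 (B's write overwrites A's)
        System.out.println(total);
    }
}
```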
Scheduler
Providing synchronization requires a mechanism for delaying task execution
scheduler
Task execution control is maintained by a program called the —, which maps task execution onto available processors
new
ready
running
blocked
dead
5 Task Execution States
New
Task Execution States
— created but not yet started
Ready
Task Execution States
— to run but not currently running (no available processor)
Blocked
Task Execution States
— has been running, but cannot now continue (usually waiting for some event to occur)
Dead
Task Execution States
— no longer active in any sense
Task Execution States
[Figure: flow diagram of the task execution states]
Liveness
complete its execution
— is a characteristic that a program unit may or may not have
In sequential code, it means the unit will eventually —
deadlock
In a concurrent environment, a task can easily lose its liveness
If all tasks in a concurrent environment lose their liveness, it is called —
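A classic way every task loses its liveness is two tasks acquiring two locks in opposite orders. A minimal Java sketch (names are illustrative; the sleep only makes the deadlock likely):

```java
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        // Each task holds one resource and waits forever for the other:
        // both lose liveness, so the program deadlocks.
        new Thread(() -> {
            synchronized (lockA) {
                pause();    // give the other thread time to grab lockB
                synchronized (lockB) { System.out.println("task 1 done"); }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { System.out.println("task 2 done"); }
            }
        }).start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```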
Semaphores
Monitors
Message Passing
3 Methods of Providing Synchronization
semaphore
a data structure consisting of a counter and a queue for storing task descriptors
task descriptor
a data structure that stores all of the relevant information about the execution state of the task
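A hedged Java sketch of that description: a counter plus a queue of waiting tasks. Here the wait set of Java's intrinsic monitor stands in for the queue of task descriptors; this is illustrative, not how any particular runtime implements semaphores.

```java
// Illustrative counting semaphore: a counter plus a queue of waiting tasks.
final class SimpleSemaphore {
    private int counter;

    SimpleSemaphore(int initial) { counter = initial; }

    // wait operation: block (join the wait queue) until the counter is positive
    synchronized void semWait() throws InterruptedException {
        while (counter == 0) wait();
        counter--;
    }

    // release/signal operation: bump the counter and wake one waiting task
    synchronized void release() {
        counter++;
        notify();
    }
}
```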
guards on the code
Semaphores can be used to implement — that accesses shared data structures
wait and release (or signal)
Semaphores have only two operations
competition and cooperation synchronization
Semaphores can be used to provide both
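Putting the two roles together, this sketch re-implements the bounded-buffer producer-consumer with java.util.concurrent.Semaphore: access is a binary semaphore guarding the code that touches the shared buffer (competition), while fullSpots and emptySpots provide cooperation. The names and the buffer size of 3 are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class SemaphoreBufferDemo {
    static final Deque<Integer> buffer = new ArrayDeque<>();
    static final Semaphore access     = new Semaphore(1); // competition: mutual exclusion guard
    static final Semaphore fullSpots  = new Semaphore(0); // cooperation: consumer waits when empty
    static final Semaphore emptySpots = new Semaphore(3); // cooperation: producer waits when full

    public static void main(String[] args) {
        new Thread(() -> {                  // producer
            try {
                for (int i = 0; i < 5; i++) {
                    emptySpots.acquire();   // wait for a free slot
                    access.acquire();       // enter the guarded code
                    buffer.addLast(i);
                    access.release();
                    fullSpots.release();    // signal that a value is available
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();

        new Thread(() -> {                  // consumer
            try {
                for (int i = 0; i < 5; i++) {
                    fullSpots.acquire();    // wait for a value
                    access.acquire();
                    System.out.println(buffer.removeFirst());
                    access.release();
                    emptySpots.release();   // signal that a slot is free
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }
}
```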
encapsulate
Monitor
The idea: — the shared data and its operations to restrict access
monitor
an abstract data type for shared data
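Java's synchronized methods give a reasonable approximation of a monitor: the shared data is private to the class and every operation on it is mutually exclusive. A minimal sketch reusing the earlier TOTAL example (class name illustrative):

```java
// Monitor-style abstract data type: the shared counter is encapsulated,
// and callers can only touch it through the mutually exclusive operations.
final class CounterMonitor {
    private int total = 3;   // shared data, hidden inside the monitor

    synchronized void increment() { total = total + 1; }  // Task A's update
    synchronized void doubleIt()  { total = 2 * total; }  // Task B's update
    synchronized int  value()     { return total; }
}
```

With this monitor, the competition synchronization is handled inside the abstract data type rather than by each task.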
Message passing
a general model for concurrency
semaphores and monitors
competition synchronization
Message passing
It can model both —
It is not just for —
task communication
Message passing
Central idea: — is like seeing a doctor. Most of the time she waits for you or you wait for her, but when you are both ready, you get together, or rendezvous
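Java's SynchronousQueue behaves like this rendezvous: a put blocks until a take arrives and vice versa, so the two tasks get together to exchange the message. A minimal sketch (names are illustrative):

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) {
        // No buffer at all: sender and receiver must meet to transfer a message.
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        new Thread(() -> {
            try {
                channel.put("hello");        // blocks until the receiver arrives
                System.out.println("message handed off");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();

        new Thread(() -> {
            try {
                System.out.println("received: " + channel.take()); // blocks until the sender arrives
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }
}
```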
Concurrent execution
Summary
— can be at the instruction, statement, or subprogram level
Physical concurrency
Summary
— when multiple processors are used to execute concurrent units
Logical concurrency
Summary
— when concurrent units are executed on a single processor
semaphores, monitors, rendezvous, threads
Summary
4 Mechanisms