Concurrency Categories
Physical Concurrency
Logical Concurrency
Physical Concurrency
Concurrency category where there are multiple independent processors (multiple threads of control)
Means there are actually multiple processors or cores working at the same time
Each processor can run its own thread—so multiple threads truly run in parallel
Example: A quad-core CPU running four threads, one per core
Logical Concurrency
Concurrency category where the appearance of physical concurrency is presented by time-sharing one processor (software can be designed as if there were multiple threads of control)
Even if there's only one processor, it switches quickly between tasks (this is called time-sharing)
To the user and software, it looks like the threads are running at the same time, but they’re just taking turns
This is often used in systems where true parallelism isn’t available
Thread Of Control
It is the sequence of program points reached as control flows through the program
It is like a single path of execution in a program
It follows the sequence of statements and function calls, tracking what the program is doing at any point
If a program has multiple threads of control, it can do multiple things at once
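A minimal sketch in Java (the class name TwoPaths is just illustrative): each started Thread object below is its own thread of control, following its own sequence of program points.

```java
public class TwoPaths {
    public static void main(String[] args) {
        // A second thread of control: it follows its own path through the program
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("worker step " + i);
            }
        });
        worker.start();
        // The main thread of control continues along its own path at the same time
        for (int i = 0; i < 3; i++) {
            System.out.println("main step " + i);
        }
    }
}
```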
Concurrency Motivations
Multiprocessor computers capable of physical concurrency are now widely used
Very common nowadays to have computers with multiple processors
Even if a machine has just one processor, a program written to use concurrent execution can be faster than the same program written for non-concurrent execution
Even if the computer has one processor, concurrency can improve performance
It allows one task to continue while another waits, making better use of time
Involves a different way of designing software that can be very useful—many real-world situations involve concurrency
Many real-life systems such as servers, games, OS, need to handle multiple things at once
Many program applications are now spread over multiple machines, either locally or over a network
Modern apps often run across multiple devices or computers
Concurrency makes it easier to coordinate and manage these distributed parts efficiently
Task
A task, which could be a process or a thread, is a program unit that can be in concurrent execution with other program units
These are small, independent units of execution within a program
They can run concurrently with other parts of the program
A task differs from an ordinary subprogram in that:
It may be implicitly started
A task can begin automatically, without a direct function call
When a program unit starts the execution of a task, it is not necessarily suspended
When you start a task, the rest of the program doesn’t have to wait for it to finish—it can continue running
When a task’s execution is completed, control may not return to the caller
Unlike regular functions that always return to the caller, a task might end without handing control back
Tasks usually work together
They are designed to cooperate; for example, one might handle input while another processes data
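A hedged sketch of those differences with a plain Java thread (the printed messages are just placeholders): starting the task does not suspend the caller, and the task's completion hands nothing back.

```java
public class TaskVsSubprogram {
    public static void main(String[] args) {
        Thread task = new Thread(() -> System.out.println("task runs independently"));
        // Unlike a subprogram call, start() does not transfer control into the
        // task body; the caller is NOT suspended and both continue from here
        task.start();
        System.out.println("caller continues immediately");
        // When the task's body finishes, control does not return to the caller;
        // the thread simply terminates on its own
    }
}
```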
Subprogram-level concurrency
It means breaking down a program into multiple tasks that can run independently or at the same time.
This makes programs faster, more responsive, and better at multitasking
Tasks Categories
Heavyweight Tasks
Lightweight Tasks
Heavyweight Tasks
A task that executes in its own address space
These are like separate programs: they don't share memory with others
Examples are processes in most operating systems, such as browsers and music players
More isolated, but more memory- and resource-intensive
Lightweight Tasks
Tasks that all run in the same address space, which is more efficient
All tasks share the same memory space (more like threads within one program)
Faster and more efficient, but need careful coordination to avoid conflicts
Examples are threads in a single app, like tabs in a browser
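A sketch of the contrast in Java: ProcessBuilder launches a separate OS process with its own address space, while Thread shares this process's memory (the "sleep" command assumes a Unix-like system).

```java
import java.io.IOException;

public class HeavyVsLight {
    static int shared = 0;  // lives in this process's address space

    public static void main(String[] args) throws IOException {
        // Heavyweight: a separate OS process with its own address space;
        // it cannot see or touch the `shared` variable above
        Process heavy = new ProcessBuilder("sleep", "1").start();

        // Lightweight: a thread inside this process, sharing its address space;
        // it can read and write `shared` directly
        Thread light = new Thread(() -> shared++);
        light.start();
    }
}
```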
Disjoint Task
A task that does not communicate with or affect the execution of any other task in the program in any way
It runs completely independently—no communication or shared variables
Task Synchronization
A mechanism that controls the order in which tasks execute
It's a way to control when and how tasks run in relation to each other
There are two types:
Cooperation and Competition
Cooperation Synchronization
Task A must wait for Task B to complete some specific activity before task A can continue its execution, e.g., the producer-consumer problem
Why it’s needed: Tasks rely on one another to produce results or prepare resources before the next can act
Producer-Consumer Problem:
The producer (Task B) puts data into a buffer
The consumer (Task A) waits until there’s something in the buffer before it proceeds
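A minimal sketch of the producer-consumer problem using Java's ArrayBlockingQueue, whose blocking put and take supply the cooperation synchronization (the buffer size and item count are arbitrary choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(5);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    buffer.put(i);  // blocks (waits) if the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    int item = buffer.take();  // blocks (waits) until an item is available
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```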
Competition Synchronization
Two or more tasks must use some resource that cannot be simultaneously used, e.g., a shared counter
It is usually provided by mutually exclusive access (approaches are discussed later)
Multiple tasks need access to a shared resource—but only one can use it at a time
Why: To avoid conflicts or corruption when reading/writing shared data
Example: Two tasks updating a shared counter or writing to a file must take turns
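A sketch of competition synchronization on a shared counter, using java.util.concurrent.locks.ReentrantLock for mutually exclusive access; without the lock, interleaved read-modify-write steps could lose updates:

```java
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();        // only one task may hold the lock at a time
                try {
                    counter++;      // the critical section: read-modify-write
                } finally {
                    lock.unlock();  // release so the other task can proceed
                }
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);  // always 200000 with the lock in place
    }
}
```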
Scheduler
Providing synchronization requires a mechanism for delaying task execution
Sometimes, tasks must wait for others to finish (like in cooperation)
The system must have a way to pause or delay tasks as needed
Task execution control is maintained by a program called the scheduler, which maps task execution onto available processors
It decides which task runs next
It ensures that tasks are assigned to available processors in an efficient way
It is the boss that tells each task when it’s their turn to run
New Task
Created but not yet started
Ready Task
Ready to run but not currently running (no available processor)
Blocked Task
Has been running, but cannot continue (usually waiting for some event to occur)
Dead Task
No longer active in any sense
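These states map roughly onto Java's Thread.State values, where RUNNABLE covers both ready and running; a small sketch observing two of them:

```java
public class TaskStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {});
        System.out.println(t.getState());  // NEW: created but not yet started
        t.start();
        // While scheduled or executing it is RUNNABLE (ready/running); a thread
        // waiting for a lock or an event would show BLOCKED or WAITING
        t.join();                          // wait for the task to finish
        System.out.println(t.getState());  // TERMINATED: dead, no longer active
    }
}
```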
Liveness
It is a characteristic that a program unit may or may not have
In sequential code, it means the unit will eventually complete its execution
In a concurrent environment, a task can easily lose it
It refers to the guarantee that a program or a task will eventually make progress (it won’t get stuck forever)
It can be lost due to issues like waiting forever for a resource or message
Deadlock
It happens when all tasks are waiting for something that can never happen—so no one can proceed
Total loss of liveness
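A classic deadlock sketch in Java (the lock names are arbitrary): each task holds one lock and waits forever for the other's, so neither can proceed and the program never terminates:

```java
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                  // give the other task time to grab lockB
                synchronized (lockB) { }  // waits forever: the other task holds lockB
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { }  // waits forever: the first task holds lockA
            }
        }).start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}
```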
Synchronization Methods
Semaphores
Monitors
Message Passing
Semaphores
It is a data structure consisting of a counter and a queue for storing task descriptors
A task descriptor is a data structure that stores all of the information about the execution state of the task
It can be used to implement guards on the code that accesses shared data structures
It has two operations: wait and release (signal)
It can be used to provide competition and cooperation synchronization
They act like traffic signals for tasks: they control who can go, who has to wait, and in what order—ensuring order and safety in concurrent systems
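A sketch using java.util.concurrent.Semaphore, where acquire plays the role of wait and release plays the role of release/signal; initializing the counter to 1 makes it a mutual-exclusion guard for competition synchronization:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreGuard {
    static final Semaphore guard = new Semaphore(1);  // counter = 1: one task at a time
    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            try {
                guard.acquire();   // wait: decrement the counter, or queue the task if it is 0
                shared++;          // guarded access to the shared data
                guard.release();   // release/signal: increment the counter, wake a waiting task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(shared);  // 2
    }
}
```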
Monitors
Encapsulate the shared data and its operations to restrict access
An abstract data type for shared data
Encapsulates shared data
Provide operations that can access or modify the data
Only one task at a time can be active within the monitor
This automatically restricts concurrent access, ensuring data consistency without requiring manual locks in most cases
It is like a bank vault: only one task is allowed in at a time, and everyone else waits outside until the vault is free; this ensures the data isn't accessed simultaneously by multiple tasks
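Java's synchronized methods provide a monitor-like construct; a sketch that encapsulates a shared counter so only one task can be active inside at a time:

```java
public class CounterMonitor {
    private int count = 0;  // the shared data, hidden inside the monitor

    // synchronized: only one task may be active in the monitor at a time
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterMonitor monitor = new CounterMonitor();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) monitor.increment();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(monitor.get());  // always 200000
    }
}
```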