PL Concurrency


1

Concurrency Categories

  • Physical Concurrency

  • Logical Concurrency

2

Physical Concurrency

  • Concurrency category in which multiple independent processors provide multiple threads of control

    • Means there are actually multiple processors or cores working at the same time

    • Each processor can run its own thread—so multiple threads truly run in parallel

    • E.G: A quad-core CPU running four threads, one per core
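
A minimal Java sketch of this card (class name and messages are illustrative): one thread is started per available core, so on a quad-core machine the four threads can truly run in parallel.

    // Start one worker thread per available core; on a multi-core
    // machine these threads can execute physically in parallel.
    public class PhysicalConcurrency {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[cores];
            for (int i = 0; i < cores; i++) {
                final int id = i;
                workers[i] = new Thread(() ->
                    System.out.println("worker " + id + " running"));
                workers[i].start();            // each worker is its own thread of control
            }
            for (Thread w : workers) w.join(); // wait for all workers to finish
        }
    }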

3

Logical Concurrency

  • Concurrency category where the appearance of physical concurrency is presented by time-sharing one processor (software can be designed as if there were multiple threads of control)

    • Even if there’s only one processor, it switches quickly between tasks (this is called time-sharing)

    • To the user and software, it looks like the threads are running at the same time, but they’re just taking turns

    • This is often used in systems where true parallelism isn’t available
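
A rough Java sketch of the same idea (thread and step counts are arbitrary): starting more threads than there are cores forces the scheduler to time-share, so the interleaved output only appears simultaneous.

    // With more threads than processors, the OS time-shares the cores;
    // the tasks take turns but look like they run at the same time.
    public class LogicalConcurrency {
        public static void main(String[] args) {
            for (int i = 0; i < 8; i++) {
                final int id = i;
                new Thread(() -> {
                    for (int step = 0; step < 3; step++)
                        System.out.println("task " + id + " step " + step); // interleaved
                }).start();
            }
        }
    }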

4

Thread Of Control

  • It is the sequence of program points reached as control flows through the program

    • It is like a single path of execution in a program

    • It follows the sequence of statements and function calls, tracking what the program is doing at any point

    • If a program has multiple threads of control, it can do multiple things at once

5

Concurrency Motivations

  • Multiprocessor computers capable of physical concurrency are now widely used

    • Very common nowadays to have computers with multiple processors

  • Even if a machine has just one processor, a program written to use concurrent execution can be faster than the same program written for non-concurrent execution

    • Even if the computer has one processor, concurrency can improve performance

    • It allows one task to continue while another waits, making better use of time

  • Involves a different way of designing software that can be very useful—many real-world situations involve concurrency

    • Many real-life systems such as servers, games, OS, need to handle multiple things at once

  • Many program applications are now spread over multiple machines, either locally or over a network

    • Modern apps often run across multiple devices or computers

    • Concurrency makes it easier to coordinate and manage these distributed parts efficiently

6

Task

  • A task, which could be a process or a thread, is a program unit that can be in concurrent execution with other program units

    • These are small, independent units of execution within a program

    • They can run concurrently with other parts of the program

  • It differs from ordinary subprograms in that

    • It may be implicitly started

      • A task begins automatically without a direct function call

    • When a program unit starts the execution of a task, it is not necessarily suspended

      • When you start a task, the rest of the program doesn’t have to wait for it to finish—it can continue running

    • When a task’s execution is completed, control may not return to the caller

      • Unlike regular functions that always return to the caller, a task might end without handing control back

  • Tasks usually work together

    • They are designed to work together; for example, one might handle input while another processes data
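
A small Java sketch of these differences (names are illustrative): the caller starts the task and keeps running, so it is not suspended, and when the task finishes control does not return to the caller.

    // The caller launches a task and continues immediately; the two
    // print statements can appear in either order.
    public class TaskDemo {
        public static void main(String[] args) {
            Thread task = new Thread(() -> System.out.println("task: running independently"));
            task.start();                               // start the task without waiting for it
            System.out.println("main: still running");  // the caller was never suspended
        }
    }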

7

Subprogram-level concurrency

  • It means breaking down a program into multiple tasks that can run independently or at the same time.

  • This makes programs faster, more responsive, and better at multitasking

8

Task Categories

  • Heavyweight Tasks

  • Lightweight Tasks

9

Heavyweight Tasks

  • A task that executes in its own address space

    • These are like separate programs; they don’t share memory with others

    • Examples are processes in most operating systems, such as browsers and music players

    • More isolated, but more memory- and resource-intensive
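
A hedged Java sketch of a heavyweight task (the child command "java -version" is just an illustrative, platform-dependent choice): the child runs as a separate OS process in its own address space.

    import java.io.IOException;

    // Launch a separate OS process: a heavyweight task with its own address space.
    public class HeavyweightTask {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process child = new ProcessBuilder("java", "-version")
                    .inheritIO()   // let the child share this program's console
                    .start();      // the child gets its own address space
            System.out.println("child exited with " + child.waitFor());
        }
    }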

10

Lightweight Tasks

  • Tasks that all run in the same address space, which is more efficient

    • All tasks share the same memory space (more like threads within one program)

    • Faster and more efficient, but they need careful coordination to avoid conflicts

    • An example is threads in a single app, like tabs in a browser
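
A minimal Java sketch of lightweight tasks (names are illustrative): two threads in one process share the same address space, so both see the same list object; the synchronized blocks are the careful coordination mentioned above.

    import java.util.ArrayList;
    import java.util.List;

    // Threads share the process's memory: both workers append to the same list.
    public class LightweightTasks {
        public static void main(String[] args) throws InterruptedException {
            List<String> shared = new ArrayList<>();   // one object, visible to both threads
            Thread a = new Thread(() -> { synchronized (shared) { shared.add("from a"); } });
            Thread b = new Thread(() -> { synchronized (shared) { shared.add("from b"); } });
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(shared);                // both entries live in the same memory
        }
    }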

11

Disjoint Task

  • A task that does not communicate with or affect the execution of any other task in the program in any way

    • It runs completely independently—no communication or shared variables

12

Task Synchronization

  • A mechanism that controls the order in which tasks execute

    • It’s a way to control when and how tasks run in relation to each other

  • There are two types

    • Cooperation and Competition

13

Cooperation Synchronization

  • Task A must wait for Task B to complete some specific activity before Task A can continue its execution, e.g., the producer-consumer problem

    • Why it’s needed: Tasks rely on one another to produce results or prepare resources before the next can act

    • Producer-Consumer Problem:

      • The producer (Task B) puts data into a buffer

      • The consumer (Task A) waits until there’s something in the buffer before it proceeds
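
A sketch of the producer-consumer problem in Java using a bounded blocking queue as the buffer (buffer size and item count are arbitrary): take() makes the consumer wait until the producer has put something.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Cooperation synchronization: the consumer blocks in take() until
    // the producer has placed a value in the bounded buffer.
    public class ProducerConsumer {
        public static void main(String[] args) {
            BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
            new Thread(() -> {                         // producer (Task B)
                try {
                    for (int i = 0; i < 5; i++) buffer.put(i);  // blocks if the buffer is full
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
            new Thread(() -> {                         // consumer (Task A)
                try {
                    for (int i = 0; i < 5; i++)
                        System.out.println("consumed " + buffer.take()); // waits if empty
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }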

14

Competition Synchronization

  • Two or more tasks must use some resource that cannot be simultaneously used, e.g., a shared counter

    • It is usually provided by mutually exclusive access (approaches are discussed later)

    • Multiple tasks need access to a shared resource—but only one can use it at a time

    • Why: To avoid conflicts or corruption when reading/writing shared data

    • EX: Two tasks updating a shared counter or writing a file take turns
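
A minimal Java sketch of competition synchronization on a shared counter (counts are arbitrary): the synchronized method gives the two tasks mutually exclusive access, so no increments are lost.

    // Without synchronized, the two threads' read-modify-write updates
    // could interleave and some increments would be lost.
    public class SharedCounter {
        private int count = 0;
        private synchronized void increment() { count++; } // one task at a time
        public static void main(String[] args) throws InterruptedException {
            SharedCounter c = new SharedCounter();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(c.count); // always 200000 with the lock in place
        }
    }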

15

Scheduler

  • Providing synchronization requires a mechanism for delaying task execution

    • Sometimes, tasks must wait for others to finish (like in cooperation)

    • The system must have a way to pause or delay tasks as needed

  • Task execution control is maintained by a program called the scheduler, which maps task execution onto available processors

    • It decides which task runs next

    • It ensures that tasks are assigned to available processors in an efficient way

    • It is like a boss that tells each task when it is its turn to run

16

New Task

Created but not yet started

17

Ready Task

Ready to run but not currently running (no available processor)

18

Blocked Task

Has been running, but cannot continue (usually waiting for some event to occur)

19

Dead Task

No longer active in any sense

20

Liveness

  • It is a characteristic that a program unit may or may not have

    • In sequential code, it means the unit will eventually complete its execution

  • In a concurrent environment, a task can easily lose it

    • It refers to the guarantee that a program or a task will eventually make progress (it won’t get stuck forever)

    • It can be lost due to issues like waiting forever for a resource or message

21

Deadlock

  • It happens when all tasks are waiting for something that can never happen—so no one can proceed

  • Total loss of liveness
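
A classic Java sketch of deadlock (lock names are illustrative): each task holds one lock and waits forever for the other's, so neither can ever proceed. Running this hangs the program deliberately.

    public class DeadlockDemo {
        public static void main(String[] args) {
            final Object lockA = new Object(), lockB = new Object();
            new Thread(() -> {
                synchronized (lockA) {
                    pause();                      // give the other thread time to take lockB
                    synchronized (lockB) { System.out.println("t1 done"); } // never reached
                }
            }).start();
            new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { System.out.println("t2 done"); } // never reached
                }
            }).start();
        }
        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        }
    }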

22

Synchronization Methods

  • Semaphores

  • Monitors

  • Message Passing

23

Semaphores

  • It is a data structure consisting of a counter and a queue for storing task descriptors

    • A task descriptor is a data structure that stores all of the information about the execution state of the task

  • It can be used to implement guards on the code that accesses shared data structures

  • It has two operations: wait and release (signal)

  • It can be used to provide competition and cooperation synchronization

  • They act like traffic signals for tasks: they control who can go, who has to wait, and in what order—ensuring order and safety in concurrent systems
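
A short Java sketch using java.util.concurrent.Semaphore (the permit count is arbitrary); Java names the classic wait and release (signal) operations acquire() and release().

    import java.util.concurrent.Semaphore;

    // A counting semaphore with 2 permits: at most two tasks may be in
    // the guarded section at once; the rest wait in the semaphore's queue.
    public class SemaphoreDemo {
        public static void main(String[] args) {
            Semaphore slots = new Semaphore(2);   // counter starts at 2
            for (int i = 0; i < 5; i++) {
                final int id = i;
                new Thread(() -> {
                    try {
                        slots.acquire();          // wait: take a permit or block
                        System.out.println("task " + id + " using the resource");
                        Thread.sleep(200);        // simulate work on the shared resource
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        slots.release();          // signal: return the permit, wake a waiter
                    }
                }).start();
            }
        }
    }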

24

Monitors

  • Encapsulate the shared data and its operations to restrict access

  • An abstract data type for shared data

    • Encapsulates shared data

    • Provide operations that can access or modify the data

    • Only one task at a time can be active in the monitor

    • This automatically restricts concurrent access, ensuring data consistency without requiring manual locks in most cases

    • It is like a bank vault: only one task is allowed in at a time, everyone else waits outside until the vault is free, and the vault ensures data isn’t accessed simultaneously by multiple tasks
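
A minimal Java sketch of a monitor (the account is an illustrative choice): the shared balance is encapsulated, and the synchronized methods guarantee that only one task at a time is active inside the "vault".

    // Shared data is private; every access goes through synchronized
    // methods, so the monitor serializes all tasks automatically.
    public class AccountMonitor {
        private int balance = 0;                 // encapsulated shared data
        public synchronized void deposit(int amount) { balance += amount; }
        public synchronized int getBalance() { return balance; }
        public static void main(String[] args) throws InterruptedException {
            AccountMonitor account = new AccountMonitor();
            Runnable work = () -> { for (int i = 0; i < 1_000; i++) account.deposit(1); };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(account.getBalance()); // always 2000: access was serialized
        }
    }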
