Concurrency 1

17 Terms

1
New cards

Q1: What is concurrency and where do we see it?

Concurrency is when multiple computations or tasks make progress over overlapping time periods. It appears when processes or threads run on different cores in parallel, when a single core time-slices between tasks, and when computation overlaps with I/O.

2
New cards

Q2: What is a thread and how does it differ from a process?

A thread is often described as a "lightweight" process. Threads share the same virtual address space (sharing the code, data, heap) but maintain their own logical control flow (with independent registers, program counter, and stack). In contrast, separate processes run in independent address spaces.

3
New cards

Q3: How does a new thread get created in a POSIX system?

New threads are created using the pthread_create() function. You provide a pointer to a thread ID variable, optional attributes, a function pointer (that the thread will run), and an argument. The new thread begins execution in the provided function.

4
New cards

Q4: What is the difference between joining and detaching a thread?

Joining (pthread_join): The parent thread waits for the specified thread to finish and can collect its return value.

Detaching (pthread_detach): The thread is marked for automatic resource reclamation after it terminates, so no join is needed—but you lose the ability to retrieve its return value.

5
New cards

Q5: What is a race condition, and how can it affect a concurrent program?

A race condition arises when multiple threads read and write shared data concurrently without proper synchronization. The final outcome becomes unpredictable because thread execution order can vary, resulting in inconsistent or incorrect results.

6
New cards

Q6: How do atomic instructions help avoid race conditions?

Atomic instructions perform a read-modify-write operation as a single uninterruptible step. This guarantees that concurrent access to a shared variable (e.g., for an increment) occurs without interference, preventing race conditions. In ARM architectures, examples include the atomic add and compare-and-swap (CAS) instructions.

7
New cards

Q7: Provide an example of an atomic instruction in ARMv8.1.

An example is the ldadd instruction:
ldadd x0, x1, [x2]
This instruction atomically reads the value at address [x2], writes that original value into x1, and stores the sum of the original value and x0 back to [x2]. The whole read-add-write sequence is a single atomic step.

8
New cards

Q8: What does it mean for a function to be thread-safe?

A function is thread-safe if it can be called concurrently from multiple threads without causing data races or unpredictable behavior. This generally means that it does not use shared mutable state without proper synchronization, or if it does, it protects that state using mechanisms such as mutexes.

9
New cards

Q9: How do mutex locks contribute to thread safety?

Mutex locks (using functions like pthread_mutex_lock() and pthread_mutex_unlock()) serialize access to critical sections of code. By ensuring that only one thread can enter the critical section at a time, mutexes prevent race conditions when shared data is being accessed or modified.

10
New cards

Q10: What is deadlock in the context of multiple locks, and how can it occur?

Deadlock occurs when two or more threads are waiting indefinitely for resources held by each other. For example, if thread T1 locks resource A and then tries to lock resource B while thread T2 locks resource B and then tries to lock resource A, neither thread can proceed. This circular waiting leads to deadlock.

11
New cards

Q11: What strategies can be used to avoid deadlock when acquiring multiple locks?

Deadlock can be avoided by:

- Always acquiring locks in a fixed (global) order.

- Using techniques like random back-off to retry lock acquisition if a conflict is detected.

- Designing the system to reduce the need for multiple simultaneous locks, if possible.

12
New cards

Q12: What are condition variables, and why are they useful?

Condition variables allow threads to wait for certain conditions (e.g., a queue not being empty) while automatically releasing a mutex during the wait. When the condition is signaled (via pthread_cond_signal() or pthread_cond_broadcast()), the waiting thread(s) re-acquire the mutex and check the condition again. They help in avoiding busy waiting while synchronizing threads.

13
New cards

Q13: Why must a thread re-check the condition after being signaled on a condition variable?

Because spurious wake-ups can occur or another thread may have already acted upon the condition, it is necessary to re-check the condition (typically using a while loop) after being signaled to ensure that the required state is actually met before continuing.

14
New cards

Q14: What is a spin lock, and what is a drawback of a naïve spin-lock implementation?

A spin lock is a lock where a thread repeatedly checks (spins) until the lock becomes available. A naïve spin-lock can cause excessive cache invalidation between cores when the lock variable is constantly updated (using compare-and-swap), leading to performance degradation due to high memory coherence traffic.

15
New cards

Q15: How can spin locks be improved in a multi-core environment?

Spin locks can be improved by:

- First polling the lock value repeatedly (using a simple load) until it appears free, reducing inter-core traffic because the variable can be cached.

- Then performing an atomic operation (CAS) to actually acquire the lock.

- Also employing memory barriers (such as dmb sy on ARM) to prevent memory reordering around the lock acquisition and release.

16
New cards

Q16: What is thread non-determinism, and how does it relate to concurrent programming?

Non-determinism in concurrent programming means that the order in which threads execute may vary from one run to the next, leading to different outcomes—even with the same initial conditions. This non-determinism is a natural consequence of scheduling differences and can make debugging concurrent programs challenging.

17
New cards

Q17: What is a lock?

A lock is a synchronization mechanism used in concurrent programming to ensure that only one thread accesses a critical section (or shared resource) at a time. When a thread acquires a lock, other threads attempting to acquire the same lock must wait until it is released, preventing race conditions and ensuring data consistency. Locks come in various forms (such as mutexes and spin locks) and are central to managing thread safety in multi-threaded environments.