multithreaded
Most modern applications are —
Threads run within application
Update display
Fetch data
Spell checking
Answer a network request
Multiple tasks within the application can be implemented by separate threads (4)
Kernels
Process creation is heavy-weight while thread creation is light-weight
Can simplify code, increase efficiency
— are generally multithreaded
responsiveness
may allow continued execution if part of process is blocked, especially important for user interfaces
resource sharing
threads share resources of process, easier than shared memory or message passing
economy
cheaper than process creation, thread switching lower overhead than context switching
scalability
process can take advantage of multicore architectures
Multicore or multiprocessor systems
put pressure on programmers
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
challenges include (5)
parallelism
implies a system can perform more than one task simultaneously
concurrency
supports more than one task making progress
Single processor / core: the scheduler provides concurrency by rapidly switching among tasks
data parallelism
distributes subsets of the same data across multiple cores, same operation on each
task parallelism
distributing threads across cores, each thread performing a unique operation
Amdahl’s Law
Identifies performance gains from adding additional cores to an application that has both serial and parallel components
serial portion
processing cores
S is —
N is —
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1.6 times
As N approaches infinity, speedup approaches 1 / S
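Stated as a formula (the standard form of Amdahl's Law, added here for reference): speedup <= 1 / (S + (1 - S) / N). Checking the example above: 1 / (0.25 + 0.75 / 2) = 1 / 0.625 = 1.6.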
adding additional cores
Serial portion of an application has disproportionate effect on performance gained by —
But does the law take into account contemporary multicore systems?
user threads
management done by user-level threads library
• POSIX Pthreads
• Windows threads
• Java threads
Three primary thread libraries
Kernel threads
Supported by the Kernel
• Windows
• Linux
• Mac OS X
• iOS
• Android
Examples – virtually all general-purpose operating systems, including: (5)
Many-to-One
One-to-One
Many-to-Many
3 Multithreading Models
Many-to-One
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on multicore system because only one may be in kernel at a time
• Solaris Green Threads
• GNU Portable Threads
Few systems currently use this model (2)
One-to-One
Windows and Linux
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
2 examples
Many-to-Many Model
ThreadFiber
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel threads
Windows with the — package
Otherwise not very common
Two-level Model
Similar to M:M, except that it allows a user thread to be bound to a kernel thread
Thread library
provides programmer with API for creating and managing threads
• Library entirely in user space
• Kernel-level library supported by the OS
Two primary ways of implementing
Pthreads
May be provided either as user-level or kernel-level
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
Specification
—, not implementation
API specifies behavior of the thread library; implementation is left to the developers of the library
Common in UNIX operating systems (Linux & Mac OS X)
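A minimal Pthreads sketch (assuming a POSIX system; link with -lpthread) showing pthread_create() and pthread_join(); the function and variable names are illustrative:

/* Create one thread that sums 1..10, then wait for it to finish. */
#include <pthread.h>
#include <stdio.h>

static void *runner(void *param) {
    long upper = (long)param;
    long sum = 0;
    for (long i = 1; i <= upper; i++)
        sum += i;
    printf("sum of 1..%ld = %ld\n", upper, sum);
    pthread_exit(NULL);                 /* terminate this thread */
}

int main(void) {
    pthread_t tid;                      /* thread identifier */
    pthread_attr_t attr;
    pthread_attr_init(&attr);           /* default thread attributes */
    pthread_create(&tid, &attr, runner, (void *)10L);
    pthread_join(tid, NULL);            /* wait for the thread to exit */
    return 0;
}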
Java threads
are managed by the JVM
Typically implemented using the threads model provided by underlying OS
Thread class
Runnable interface
Java threads may be created by:
• Extending —
• Implementing the —
• Standard practice is to implement the second
Java Executor Framework
Executor interface
Rather than explicitly creating threads, Java also allows thread creation around the —
Implicit Threading
Growing in popularity as the number of threads increases; program correctness is harder to ensure with explicit threads
Creation and management of threads done by compilers and run-time libraries rather than programmers
• Thread Pools
• Fork-Join
• OpenMP
• Grand Central Dispatch
• Intel Threading Building Blocks
Five methods explored
Thread Pools
Create a number of threads in a pool where they await work
Advantages:
Usually slightly faster to service a request with an existing thread than to create a new thread
Allows the number of threads in the application(s) to be bound to the size of the pool
Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task
e.g., tasks could be scheduled to run periodically
The Windows API supports thread pools
static ExecutorService newSingleThreadExecutor()
static ExecutorService newFixedThreadPool(int size)
static ExecutorService newCachedThreadPool()
Java Thread Pools: Three factory methods for creating thread pools in Executors class:
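The factory methods above are Java; as a rough illustration of what a pool does internally, here is a C/Pthreads sketch. The fixed-size queue and names such as pool_submit are illustrative assumptions, not a real API:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4
#define QUEUE_CAP 16

typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

static task_t queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

/* Each worker sleeps until a task is queued, then runs it. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        task_t t = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&lock);
        t.fn(t.arg);                    /* run the task outside the lock */
    }
    return NULL;
}

/* Hypothetical submission helper: enqueue work for an existing thread. */
static int pool_submit(task_fn fn, void *arg) {
    pthread_mutex_lock(&lock);
    if (count == QUEUE_CAP) { pthread_mutex_unlock(&lock); return -1; }
    queue[tail] = (task_t){ fn, arg };
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    return 0;
}

static void print_task(void *arg) { printf("task %ld\n", (long)arg); }

int main(void) {
    pthread_t workers[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (long i = 0; i < 8; i++)
        pool_submit(print_task, (void *)i);
    sleep(1);                           /* crude: let the workers drain the queue */
    return 0;
}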
Fork-Join Parallelism
Multiple threads (tasks) are forked, and then joined
ForkJoinTask
an abstract base class
RecursiveTask and RecursiveAction
2 classes extend ForkJoinTask
RecursiveTask
returns a result (via the return value from the compute() method)
RecursiveAction
does not return a result
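RecursiveTask and RecursiveAction are Java classes; as a language-neutral illustration, here is a C/Pthreads sketch of the general fork-join pattern (divide the work, fork a thread for one half, recurse on the other, then join). Names such as sum_task are illustrative:

#include <pthread.h>
#include <stdio.h>

#define THRESHOLD 2   /* below this size, just compute sequentially */

typedef struct { const int *data; int lo, hi; long sum; } job_t;

static void *sum_task(void *arg) {
    job_t *job = arg;
    int n = job->hi - job->lo;
    if (n <= THRESHOLD) {                          /* base case: sequential sum */
        job->sum = 0;
        for (int i = job->lo; i < job->hi; i++)
            job->sum += job->data[i];
        return NULL;
    }
    int mid = job->lo + n / 2;
    job_t left  = { job->data, job->lo, mid, 0 };
    job_t right = { job->data, mid, job->hi, 0 };

    pthread_t tid;
    pthread_create(&tid, NULL, sum_task, &left);   /* fork: left half in a new thread */
    sum_task(&right);                              /* compute right half in this thread */
    pthread_join(tid, NULL);                       /* join the forked task */
    job->sum = left.sum + right.sum;
    return NULL;
}

int main(void) {
    int data[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    job_t root = { data, 0, 8, 0 };
    sum_task(&root);
    printf("sum = %ld\n", root.sum);               /* prints 36 */
    return 0;
}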
OpenMP
Set of compiler directives and an API for C, C++, FORTRAN
Provides support for parallel programming in shared-memory environments
parallel regions
Identifies — blocks of code that can run in parallel
#pragma omp parallel
Create as many threads as there are cores
#pragma omp parallel for: run the iterations of the for loop in parallel
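A small OpenMP sketch in C (compile with -fopenmp) combining both directives above:

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* Create as many threads as there are cores; each thread runs this block. */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* Split the loop iterations across the team of threads. */
    int a[100], i;
    #pragma omp parallel for
    for (i = 0; i < 100; i++)
        a[i] = i * i;

    printf("a[99] = %d\n", a[99]);
    return 0;
}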
Grand Central Dispatch
Apple technology for macOS and iOS operating systems
Extensions to C, C++ and Objective-C languages, API, and run-time library
Allows identification of parallel sections
Manages most of the details of threading
Block is in “^{ }”:
^{ printf("I am a block"); }
Blocks placed in dispatch queue
Assigned to available thread in thread pool when removed from queue
serial
main queue
—: blocks removed in FIFO order; the queue is per process and is called the —
Programmers can create additional serial queues within program
concurrent
removed in FIFO order but several may be removed at a time
QOS_CLASS_USER_INTERACTIVE
QOS_CLASS_USER_INITIATED
QOS_CLASS_UTILITY
QOS_CLASS_BACKGROUND
Four system-wide queues divided by quality-of-service class:
closure
For the Swift language a task is defined as a — similar to a block, minus the caret
It is submitted to the queue using the dispatch_async() function
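A minimal GCD sketch in C (Apple platforms, clang with blocks support). Using a dispatch group here is just one way to wait for the asynchronous block to finish:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* One of the system-wide concurrent queues, selected by QOS class. */
    dispatch_queue_t queue =
        dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);

    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{ printf("I am a block\n"); });

    /* Block until everything submitted to the group has run. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}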
Intel Threading Building Blocks (TBB)
Template library for designing parallel C++ programs
A serial version of a simple for loop
The same for loop written using TBB with parallel_for statement
Threading Issues
Semantics of fork() and exec() system calls
Signal handling
Synchronous and asynchronous
Thread cancellation of target thread
Asynchronous or deferred
Thread-local storage
Scheduler Activations
Semantics of fork() and exec()
Does fork() duplicate only the calling thread or all threads?
Some UNIX systems have two versions of fork()
exec() usually works as normal – replace the running process including all threads
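An illustrative POSIX sketch: after fork() the child contains only a copy of the calling thread, and a following exec() replaces the entire child image anyway:

#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>

static void *worker(void *arg) {
    (void)arg;
    for (;;) pause();               /* background thread exists in the parent only */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: only the thread that called fork() exists here. */
        execlp("ls", "ls", (char *)NULL);   /* replaces the whole child process */
        _exit(1);                           /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);
    return 0;
}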
Signals
are used in UNIX systems to notify a process that a particular event has occurred.
signal handler
A — is used to process signals
Signal is generated by particular event
Signal is delivered to a process
Signal is handled by one of two signal handlers:
default
user-defined
default handler
Every signal has a — that the kernel runs when handling the signal
User-defined signal handler
— can override the default
For single-threaded programs, the signal is delivered to the process
multi-threaded
Where should a signal be delivered for —?
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
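A POSIX sketch of a user-defined handler plus pthread_kill() to deliver a signal to one particular thread (the first option above); names are illustrative:

#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig) {
    (void)sig;
    /* Only async-signal-safe calls belong in a handler. */
    write(STDOUT_FILENO, "signal handled\n", 15);
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) pause();               /* wait for signals */
    return NULL;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;        /* user-defined handler overrides the default */
    sigaction(SIGUSR1, &sa, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_kill(tid, SIGUSR1);     /* deliver SIGUSR1 to that thread only */
    sleep(1);
    return 0;
}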
Thread Cancellation
Terminating a thread before it has finished
target thread
Thread to be canceled is —
Asynchronous cancellation
terminates the target thread immediately
Deferred cancellation
allows the target thread to periodically check if it should be cancelled
Pthread code
to create and cancel a thread
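A minimal sketch of creating and then cancelling a thread (deferred cancellation, the default):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    while (1) {
        /* ... do work ... */
        pthread_testcancel();                   /* explicit cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);   /* create the target thread */
    /* ... */
    pthread_cancel(tid);                        /* request cancellation */
    pthread_join(tid, NULL);                    /* wait for it to terminate */
    printf("worker cancelled\n");
    return 0;
}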
Invoking thread cancellation requests cancellation, but actual cancellation depends on thread state
If thread has cancellation disabled, cancellation remains pending until thread enables it
cancellation point
cleanup handler
Default type is deferred
Cancellation only occurs when thread reaches —
e.g., pthread_testcancel()
Then — is invoked
On Linux systems, thread cancellation is handled through signals
interrupt() method
In Java, deferred cancellation uses the —, which sets the interrupted status of a thread.
A thread can then check to see if it has been interrupted
Thread-local storage (TLS)
— allows each thread to have its own copy of data
Useful when you do not have control over the thread creation process (e.g., when using a thread pool)
Different from local variables
Local variables visible only during single function invocation
TLS visible across function invocations
Similar to static data
TLS is unique to each thread
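A small C sketch of TLS, assuming C11's _Thread_local (supported by GCC and Clang); each thread gets an independent copy of the variable:

#include <pthread.h>
#include <stdio.h>

static _Thread_local int errcount = 0;   /* one independent copy per thread */

static void *worker(void *arg) {
    errcount += (int)(long)arg;          /* modifies only this thread's copy */
    printf("thread %ld sees errcount = %d\n", (long)arg, errcount);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_create(&t2, NULL, worker, (void *)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main sees errcount = %d\n", errcount);   /* still 0 in main */
    return 0;
}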
Scheduler Activations
Both M:M and Two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
lightweight process (LWP)
Typically use an intermediate data structure between user and kernel threads —
Appears to be a virtual processor on which the process can schedule a user thread to run
Each LWP is attached to a kernel thread
How many LWPs to create?
upcalls
upcall handler
Scheduler activations provide —, a communication mechanism from the kernel to the — in the thread library
This communication allows an application to maintain the correct number of kernel threads
Windows Threads
Linux Threads
Operating System Examples 2
Windows API
— primary API for Windows applications
Implements the one-to-one mapping, kernel-level
thread
Each — contains
A thread id
Register set representing state of processor
Separate user and kernel stacks for when thread runs in user mode or kernel mode
Private data storage area used by run-time libraries and dynamic link libraries (DLLs)
context
The register set, stacks, and private storage area are known as the — of the thread
ETHREAD (executive thread block)
includes pointer to process to which thread belongs and to KTHREAD, in kernel space
KTHREAD (kernel thread block)
scheduling and synchronization info, kernel-mode stack, pointer to TEB, in kernel space
TEB (thread environment block)
thread id, user-mode stack, thread-local storage, in user space
tasks
Linux refers to them as — rather than threads
clone() system call
Thread creation is done through —
It allows a child task to share the address space of the parent task (process)
Flags control behavior
struct task_struct
points to process data structures (shared or unique)
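A minimal Linux sketch of clone() with sharing flags similar to what thread creation requests (CLONE_VM shares the address space, so the child task sees the parent's variables); names are illustrative:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg) {
    /* Runs in the child task, which shares the parent's address space. */
    printf("child sees shared value: %d\n", *(int *)arg);
    return 0;
}

int main(void) {
    int shared = 42;
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);       /* the child needs its own stack */
    if (stack == NULL) return 1;

    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + stack_size, flags, &shared);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);                  /* SIGCHLD termination signal lets us wait */
    free(stack);
    return 0;
}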