Process management: Threads
Authors: A. Silberschatz, P. Baer Galvin and G. Gagne ©2003
Introduce the notion of a thread as a fundamental unit of CPU utilization.
Discuss APIs for the Java thread library.
What is the difference between concurrency and parallelism?
What are threads, how are they used, and what are their benefits?
What is the lifecycle of a thread in Java?
Sequential execution: only one task or subtask (job, process) runs at a time; it must complete before another task starts.
Concurrent execution: multiple tasks appear to run simultaneously through CPU time-slicing; one task runs while others wait (process states).
Parallel execution: multiple tasks run simultaneously on multiple CPUs/cores; requires hardware support for true parallelism.
Sequential: tasks execute one after the other, e.g., Task 1 runs, then Task 2.
Concurrent: tasks interleave execution, e.g., Task 2 runs while Task 1 is waiting.
Parallel: tasks execute simultaneously on different CPUs, e.g., Task 1 and Task 2 run at the same time.
Parallel and concurrent: a combination of both execution methods.
Concurrency: Tasks start, run, and complete in overlapping time.
Parallelism: Tasks run at the same time on multiple CPUs/cores.
Applications can be neither parallel nor concurrent (e.g., sequential processing).
Applications can also be parallel but not concurrent or concurrent but not parallel.
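The distinction can be made concrete in a short Java sketch (class and thread names here are illustrative, not from the text): two threads always run concurrently, but whether they also run in parallel depends on the hardware.

```java
public class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        // Two tasks whose executions overlap in time: concurrency.
        // Whether they also run at the same instant depends on how many
        // cores the JVM can use: parallelism requires hardware support.
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };
        Thread t1 = new Thread(task, "task-1");
        Thread t2 = new Thread(task, "task-2");
        t1.start();
        t2.start();
        t1.join();  // wait for both to finish
        t2.join();
        System.out.println("cores available: "
                + Runtime.getRuntime().availableProcessors());
    }
}
```

The interleaving of the two threads' output is not deterministic; that nondeterminism is itself a hallmark of concurrent execution.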
A thread is the smallest unit of CPU utilization.
Consists of: thread ID, program counter, register set, and stack.
Shares code section, data section, and resources with other threads in the same process.
Examples include web browsers and word processors that handle various tasks in separate threads.
Single-threaded: Only one thread of execution.
Multithreaded: Multiple threads allow concurrent execution within the same memory space.
Responsiveness: Multithreading allows programs to remain responsive even when part of them is blocked.
Resource Sharing: Threads share the memory and resources of their parent process.
Economy: Thread creation is more resource-efficient than process creation.
Scalability: Multithreading enhances performance in multiprocessor architectures by executing threads in parallel.
A single-threaded server handles one client request at a time.
Multithreaded servers create new threads for each client request, improving concurrency.
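The thread-per-request pattern can be sketched as follows. This is a minimal, self-contained illustration (class names and the echo protocol are our invention): an acceptor loop hands each connection to a new thread, and the main thread plays the role of a client.

```java
import java.io.*;
import java.net.*;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port (0) so the sketch runs anywhere.
        ServerSocket server = new ServerSocket(0);
        int port = server.getLocalPort();

        // Accept loop: one new thread per client connection.
        Thread acceptor = new Thread(() -> {
            try (ServerSocket s = server) {
                while (true) {
                    Socket client = s.accept();
                    new Thread(() -> handle(client)).start();
                }
            } catch (IOException ignored) { }
        });
        acceptor.setDaemon(true);
        acceptor.start();

        // Client side: send one line, read the echoed reply.
        try (Socket sock = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());
        }
    }

    // Per-client work runs on its own thread, so a slow client
    // does not block the accept loop.
    static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        } catch (IOException ignored) { }
    }
}
```

A single-threaded version would serve clients strictly one at a time inside the accept loop, which is exactly the limitation multithreading removes.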
Hardware evolution has led to systems with multiple CPU cores.
Multithreaded programming enhances concurrency and performance in multicore systems.
Amdahl's Law describes the potential performance gain from adding cores to an application with both serial and parallel components.
Formula: speedup ≤ 1 / (S + (1 - S) / N)
(Where S is the serial portion and N is the number of cores.)
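The bound can be checked numerically. This short Java sketch (names are ours) evaluates the formula for an application that is 25% serial:

```java
public class AmdahlDemo {
    // Upper bound on speedup: 1 / (S + (1 - S) / N).
    static double speedupBound(double s, int n) {
        return 1.0 / (s + (1.0 - s) / n);
    }

    public static void main(String[] args) {
        // 25% serial work on 4 cores: 1 / (0.25 + 0.75 / 4) ≈ 2.29.
        System.out.println(speedupBound(0.25, 4));
        // As N grows, the bound approaches 1 / S = 4: no number of
        // cores can speed up the serial portion.
        System.out.println(speedupBound(0.25, 1_000_000));
    }
}
```

Note that even with four cores the application gets nowhere near a 4x speedup; the serial fraction dominates as N grows.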
Identifying tasks: Applications must be divided into independent, concurrent tasks.
Load balancing: Ensuring tasks are evenly distributed across processors.
Data splitting: dividing data so that tasks can operate on separate portions.
Data dependency: managing dependencies among tasks' data to prevent errors.
Testing and debugging: Complexity increases significantly with parallel execution.
Ways to create threads:
By extending the Thread class.
By implementing the Runnable interface.
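Both creation styles can be sketched in a few lines of Java (class names are illustrative):

```java
// Way 1: extend the Thread class and override run().
class Worker extends Thread {
    @Override
    public void run() {
        System.out.println("extended Thread");
    }
}

public class CreateThreads {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Worker();

        // Way 2: implement Runnable (here as a lambda) and pass it
        // to a Thread's constructor.
        Thread t2 = new Thread(() -> System.out.println("implemented Runnable"));

        t1.start();   // schedules a new thread that will invoke run()
        t2.start();
        t1.join();    // wait for both to terminate
        t2.join();
    }
}
```

Implementing Runnable is usually preferred: since Java has single inheritance, extending Thread uses up the one superclass slot.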
States: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED.
Transitions between states depend on thread actions and other threads' statuses.
NEW: Thread created but not started.
RUNNABLE: Executing, or ready to execute and waiting for CPU time.
BLOCKED: Waiting for a monitor lock.
WAITING: Waiting indefinitely for another thread to perform a particular action (e.g., via notify() or join()).
TIMED_WAITING: Waiting with a specified timeout.
TERMINATED: Thread completed or terminated.
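Several of these states can be observed directly with Thread.getState(); this minimal sketch (names are ours) walks one thread from NEW to TERMINATED:

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100);   // the thread is TIMED_WAITING here
            } catch (InterruptedException ignored) { }
        });

        System.out.println(t.getState());  // NEW: created but not started
        t.start();
        Thread.sleep(50);                  // give it time to reach sleep()
        System.out.println(t.getState());  // expected: TIMED_WAITING
        t.join();                          // wait for it to finish
        System.out.println(t.getState());  // TERMINATED
    }
}
```

BLOCKED and WAITING would appear instead if the thread were contending for a monitor lock or parked in wait()/join() without a timeout.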
Multithreading maximizes CPU utilization: threads execute concurrently, share their process's memory, and are lightweight compared to full processes.
Two methods to create threads:
Extending the Thread class (inherits built-in thread functionality).
Implementing the Runnable interface (leaves the class free to extend another class, since Java has single inheritance).
Override run() method in the class extending Thread.
Use the start() method to begin thread execution.
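A common pitfall is calling run() directly instead of start(). The following sketch (names are ours) shows the difference: run() is an ordinary method call on the current thread, while start() begins execution on a new thread.

```java
public class StartVsRun {
    public static void main(String[] args) throws InterruptedException {
        Runnable whoAmI = () ->
                System.out.println("running on: " + Thread.currentThread().getName());

        Thread t = new Thread(whoAmI, "worker");

        t.run();    // ordinary method call: executes on the main thread
        t.start();  // begins real thread execution on "worker"
        t.join();
    }
}
```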
Understand states like NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED to manage threads efficiently.
Understanding threads' role, benefits, and lifecycle within Java's concurrency model is crucial for effective programming.