Why do we need explicit locks ? What is wrong with intrinsic locks ?
Intrinsic (Implicit) Lock:
Managed by the JVM, which acquires and releases the lock automatically (synchronized).
No advanced features: no timed waiting, no fairness, no multiple conditions.
Only one wait set per monitor; for thread coordination we must use wait()/notify().
What is ReentrantLock ??
ReentrantLock – Key Points
Explicit Locking:
Developer must explicitly call lock() and unlock() (usually inside try/finally).
Single-threaded Ownership:
Only one thread can hold the lock at a time.
Supports reentrancy (same thread can acquire it multiple times).
Advanced Features:
Timed Locking: tryLock() and tryLock(timeout) → attempt acquisition with an optional wait/timeout.
Fairness:
true: FIFO order → prevents starvation (slower).
false (default): threads may barge → higher throughput (possible starvation).
Multiple Condition Queues:
A single Lock can have multiple Conditions for fine-grained wait/notify sets.
Interruptible Waiting:
lockInterruptibly() lets threads respond to interrupts while waiting.
Useful for graceful shutdowns, deadlock recovery, and handling locks held “too long.”
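A minimal sketch of the explicit lock()/unlock() and timed tryLock() patterns described above (the Counter class is illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    // Passing 'true' here would enable fairness (FIFO); default is unfair for throughput.
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();                 // explicit acquire
        try {
            count++;                 // critical section
        } finally {
            lock.unlock();           // always release in finally
        }
    }

    boolean tryIncrement() throws InterruptedException {
        // Timed acquisition: give up after 1 second instead of blocking forever.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;                // could not acquire within the timeout
    }
}
```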
What is ReadWriteLock ??
Two Locks Instead of One:
Read Lock (Shared):
Multiple threads can hold it concurrently.
Acquired only when no writer is active; otherwise readers wait.
Safe because readers don’t modify the resource.
Write Lock (Exclusive):
Only one thread can hold it.
Acquired only when no readers (and no writers) are active.
The Benefit:
Huge scalability gains in read-heavy applications.
Ideal for caches, registries, in-memory data stores.
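A minimal sketch of a read-mostly cache guarded by ReentrantReadWriteLock (SimpleCache is an illustrative name):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SimpleCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    V get(K key) {
        rwLock.readLock().lock();        // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void put(K key, V value) {
        rwLock.writeLock().lock();       // exclusive: waits until no readers or writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```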
What is Condition ??
An alternative to wait/notify for thread coordination, paired with a Lock.
Why it’s better than wait/notify:
Multiple wait-sets: one Lock can create multiple Conditions, each with its own wait queue (models different predicates cleanly).
Targeted wake-ups: signal()/signalAll() wake only threads waiting on that Condition, not every waiter on a monitor.
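A minimal bounded-buffer sketch using one Lock with two Conditions, one per predicate (this is essentially what ArrayBlockingQueue implements internally):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // waiting producers
    private final Condition notEmpty = lock.newCondition();  // waiting consumers
    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await();   // re-check predicate in a loop
            items[putIdx] = item;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();          // wake one consumer, not every waiter
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await();
            T item = (T) items[takeIdx];
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();           // wake one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```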
Do we need Lock and Condition ?
You only need to implement Lock + Condition for custom thread coordination (as in the bounded-buffer sketch above). In most cases, these patterns are already provided by Java’s standard concurrency utilities such as ArrayBlockingQueue, LinkedBlockingQueue, Semaphore, and CountDownLatch.
What is AbstractQueuedSynchronizer ??
AQS is a framework in java.util.concurrent.locks for building synchronizers. It manages state + a FIFO wait queue and defers acquire/release logic to subclasses like ReentrantLock, Semaphore, CountDownLatch, and ReentrantReadWriteLock.
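A minimal sketch of a one-shot latch built on AQS, following the subclassing pattern from the AQS documentation (BooleanLatch is an illustrative name):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// One-shot latch on top of AQS: state 0 = closed, 1 = open.
class BooleanLatch {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignore) {
            return getState() == 1 ? 1 : -1;   // succeed only once the latch is open
        }
        @Override
        protected boolean tryReleaseShared(int ignore) {
            setState(1);                        // open the latch
            return true;                        // propagate: wake all waiting threads
        }
    }

    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);     // blocks in AQS's FIFO queue until signalled
    }

    public void signal() {
        sync.releaseShared(1);
    }
}
```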
What is a Synchronizer ?
A synchronizer is a concurrency utility that coordinates thread execution by making threads wait for each other or for specific conditions.
Examples: CountDownLatch, Semaphore, CyclicBarrier, Phaser.
They focus on ordering and coordination rather than direct mutual exclusion.
By managing when threads can proceed, they help prevent race conditions and ensure data consistency.
What is CountDownLatch ? Key API ? Characteristics ? Use Cases ?
Concept
A synchronization aid that lets one or more threads wait until a set of operations complete in other threads.
The Starting Gun: runners (threads) wait for the gun (await()); the referee (main thread) counts down (“3, 2, 1”) via countDown() and fires the gun when the count reaches 0.
CountDownLatch is a one-time gate: threads wait on await(), and when the count hits zero via countDown(), all waiting threads are released.
Key API
CountDownLatch(int count) → initialize with a count.
void await() → wait until count = 0.
boolean await(timeout, unit) → wait with timeout.
void countDown() → decrement the count.
long getCount() → inspect the remaining count (debug only).
Characteristics
One-time use → cannot be reset (use CyclicBarrier / Phaser if reuse is needed).
Flexible waiting → any number of threads can wait, and any number can count down.
Once open, always open → after count = 0, all future await() calls pass immediately.
✅ Use Cases
Main thread waits for workers: e.g., start 5 threads, wait for all to finish before proceeding.
Simultaneous start signal: multiple threads wait until all are ready, then proceed together.
Service dependencies: ensure required services are initialized before continuing.
⚠ Pitfalls
Not reusable → once opened, cannot close again.
Count mismatch → if you forget a countDown(), threads may block forever.
Debugging → getCount() is only informational; don’t use it for logic.
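A minimal sketch of the main-thread-waits-for-workers use case:

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    System.out.println("Worker " + id + " finished");
                } finally {
                    done.countDown();   // count down in finally so the latch can't get stuck
                }
            }).start();
        }

        done.await();                   // main thread blocks until count hits 0
        System.out.println("All workers done, proceeding");
    }
}
```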
CyclicBarrier ?? API ?? Characteristics ??
Concept
A synchronization barrier where a fixed number of threads (parties) must all call await() before any can proceed.
Think of it as: “Wait until everyone arrives, then we all move forward together.”
“CyclicBarrier is a reusable barrier that makes N threads wait at a rendezvous point until all arrive, optionally running a barrier action before releasing them.”
Key API
CyclicBarrier(int parties) → barrier for N threads.
CyclicBarrier(int parties, Runnable barrierAction) → optional action executed once per trip, by the last thread to arrive.
int await() → block until all threads arrive; returns the arrival index (getParties() - 1 for the first arrival, 0 for the last).
int await(timeout, unit) → wait with timeout.
void reset() → reset the barrier; waiting threads get BrokenBarrierException.
Characteristics
Reusable (cyclic) → after all threads cross, barrier resets for next cycle.
Barrier action → useful for aggregation, merging results, logging, etc. Runs in the last arriving thread.
Arrival index → can let threads know their “order of arrival.”
✅ Use Cases
Parallel processing stages → split task among threads, synchronize at the end of each stage.
Multi-phase algorithms → e.g., simulation steps, where all threads must complete one step before starting the next.
Data merging → barrier action can combine partial results before proceeding.
⚠ Pitfalls
Fixed parties → the number of threads must match the barrier size exactly, or risk deadlock.
Barrier broken → if one thread times out, fails, or resets → others get BrokenBarrierException.
Barrier action risk → if the action throws an exception → all threads see BrokenBarrierException.
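A sketch of a reusable multi-phase rendezvous with a barrier action:

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        int parties = 3;
        // Barrier action runs once per trip, in the last arriving thread.
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("--- all arrived, next phase ---"));

        for (int i = 0; i < parties; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    for (int phase = 0; phase < 2; phase++) {  // reusable across phases
                        System.out.println("Thread " + id + " finished phase " + phase);
                        barrier.await();    // wait until everyone arrives
                    }
                } catch (Exception e) {     // InterruptedException / BrokenBarrierException
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```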
Semaphore ?? API ?? Characteristics ?? Use Cases ?? Pitfalls ??
🔑 Semaphore – Permit-based Access Control
Concept
A synchronization aid that controls access to a shared resource through permits.
Threads must acquire() a permit before proceeding, and release() it when done.
If no permits are available, the thread blocks until one is released.
Key API
Semaphore(int permits) → create with a given number of permits.
Semaphore(int permits, boolean fair) → with fairness (FIFO ordering of waiting threads).
void acquire() / acquire(n) → block until permits are available.
boolean tryAcquire() → non-blocking attempt.
void release() / release(n) → return permits.
Characteristics
Counting vs. Binary:
Counting Semaphore → multiple permits (resource pool).
Binary Semaphore → 1 permit (like a mutex, but no ownership tracking).
Flexible use: a single thread can acquire multiple permits, or release permits acquired earlier.
Fairness option: ensures waiting threads get permits in order (but may reduce throughput).
✅ Use Cases
Resource Pooling → database connections, thread pools, network sockets.
Rate limiting / throttling → allow only N threads/ops at once.
Mutual exclusion (binary) → enforce exclusive access (lighter than Lock in some cases).
⚠ Pitfalls
Permit leaks → forgetting release() (always release in finally).
Deadlock → all permits eventually acquired but never released.
Not reentrant → a thread that acquires permits must release them explicitly.
Fairness overhead → enabling fairness may reduce performance.
👉 Interview one-liner:
“Semaphore uses permits to control concurrent access to resources. Threads acquire permits before entry, release after use; useful for resource pooling or throttling.”
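A minimal sketch of permit-based pooling (ConnectionPool is an illustrative name; the sleep stands in for real work):

```java
import java.util.concurrent.Semaphore;

class ConnectionPool {
    private final Semaphore permits = new Semaphore(3);   // at most 3 concurrent users

    void useConnection() throws InterruptedException {
        permits.acquire();            // blocks if all 3 permits are taken
        try {
            System.out.println(Thread.currentThread().getName() + " using connection");
            Thread.sleep(100);        // simulate work with the pooled resource
        } finally {
            permits.release();        // always return the permit
        }
    }
}
```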
Synchronizer: Key usage
Key Usage:
Use a CountDownLatch for a simple "wait-for-events" one-time scenario.
Use a CyclicBarrier for a fixed-size party of threads that need to rendezvous repeatedly.
Use a Semaphore to control and throttle access to a limited pool of resources.
Use a Phaser for the most complex, multi-phase problems where the number of coordinating participants may change.
Semaphore vs Lock
Lock / synchronized → Ownership + Mutual Exclusion
Ensures mutual exclusion → only 1 thread enters critical section.
Ownership enforced → only the acquiring thread can release.
Designed for exclusive access (1:1 only).
Semaphore → No Ownership + Concurrency Control
More general → allows N threads to access resource at once.
🚫 No ownership → any thread may release() (dangerous if misused).
Ideal for resource pooling, throttling, rate limiting.
Binary semaphore can mimic a lock, but lacks ownership guarantee.
👉 Summary
Use Lock for strict mutual exclusion (safe, ownership enforced).
Use Semaphore when you need to control concurrent access to a pool of resources.
⚡ Memory hook:
Lock = “My key, my lock” (ownership).
Semaphore = “Tickets for N people” (permits, no ownership).
What are Legacy Synchronized Collections ?? How do they work ?? Limitations ??
How They Work
Wraps each collection method with the synchronized keyword.
One intrinsic lock (the collection itself) guards all operations.
Limitations
Coarse-grained locking → only one thread at a time, even for reads.
Scalability issues under high contention (lock bottleneck).
Iteration not thread-safe → must manually synchronize during iteration.
Legacy API; not fail-safe iterators.
Use Case Today
Rarely used.
Acceptable only in low-concurrency or backward compatibility scenarios.
Modern alternative: Concurrent Collections (ConcurrentHashMap, CopyOnWriteArrayList, etc.) for better scalability.
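A sketch of the manual synchronization that legacy wrappers require during iteration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a");
        list.add("b");

        // Individual calls are synchronized, but iteration spans many calls,
        // so the caller must hold the wrapper's lock for the whole loop.
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }
    }
}
```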
Why we need to use ConcurrentHashMap ??
Why It Exists
HashMap → ❌ not thread-safe (corrupts under concurrent access).
Collections.synchronizedMap() → ✅ thread-safe, but a single global lock = bottleneck.
ConcurrentHashMap → ✅ thread-safe, high-performance map with fine-grained locking, atomic methods for solving race conditions, and weakly consistent iterators.
What are the key features of ConcurrentHashMap and when should you use it ??
Key Features
Fine-grained Locking (Lock Striping → CAS since Java 8)
Concurrent Iteration: weakly consistent (snapshot-like).
Atomic Read-Modify-Write Methods 🚀
Solve check-then-act race conditions, where two threads could otherwise overwrite each other and cause data inconsistency.
Examples:
putIfAbsent(k, v) → insert only if absent.
computeIfAbsent(k, f) → lazy init of values (⭐ very common).
replace(k, oldV, newV) → conditional replace.
merge(k, v, fn) → aggregate safely (perfect for counters).
When to Use
Default choice for shared maps in multi-threaded apps.
Ideal for caches, registries, counters, configuration maps.
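A minimal sketch of the atomic read-modify-write methods:

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // merge(): atomic read-modify-write, safe for concurrent counters.
        for (String word : new String[]{"a", "b", "a"}) {
            counts.merge(word, 1, Integer::sum);
        }

        // computeIfAbsent(): lazy, atomic initialization of a value.
        counts.computeIfAbsent("c", k -> 0);

        // putIfAbsent(): insert only if the key has no mapping yet.
        counts.putIfAbsent("a", 99);   // no-op: "a" is already present

        System.out.println(counts);    // e.g. {a=2, b=1, c=0} (iteration order not guaranteed)
    }
}
```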
What is CopyOnWriteArrayList/Set ??
CopyOnWriteArrayList/Set are thread-safe collections optimized for read-heavy scenarios. Writes copy the whole array, but reads are fast, lock-free, and safe even during iteration (snapshot semantics).
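A minimal sketch of snapshot semantics during iteration:

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("listener-1");
        listeners.add("listener-2");

        // The iterator works on a snapshot: concurrent writes never throw
        // ConcurrentModificationException and are not seen by this loop.
        for (String l : listeners) {
            listeners.add("added-during-iteration");   // safe; visible only to later iterations
            System.out.println(l);
        }
        System.out.println(listeners.size());          // 4: snapshot of 2, plus 2 adds
    }
}
```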
Why not just use new Thread() instead of ExecutorService ?
Limitations of new Thread()
High Overhead 🚨
Creating a thread per task = expensive (creation + teardown).
For short-lived tasks, thread creation cost > task execution time.
No Thread Reuse
Each task needs a fresh thread.
Wastes memory + CPU context switches.
No Task Queuing / Backpressure
No built-in mechanism to queue tasks.
No way to limit concurrency automatically.
Must implement queueing + throttling manually.
No Easy Result Handling
Hard to get return values from threads.
Requires manual Future/Callable management if not using the Executor framework.
No Lifecycle Management
No standard way to monitor tasks.
No support for graceful shutdown or controlled termination.
“Using new Thread() for every task is inefficient and unmanageable. ExecutorService provides thread pooling, queuing, result handling, and lifecycle management — making it the standard choice for concurrent task execution in Java.”
Why ExecutorService is Better ??
What It Is
Introduced in Java 5 (java.util.concurrent).
A framework that decouples task submission from execution management.
Backed by ThreadPoolExecutor → reuses threads instead of creating one per task.
Core Benefits
Thread pooling → reuse threads, minimize overhead.
Queuing & scheduling → built-in task management, prevents overload.
Futures & Callables → easy return values, error handling.
Graceful shutdown → shutdown(), awaitTermination().
Scalability → built-in thread pool strategies (FixedThreadPool, CachedThreadPool, ScheduledThreadPool).
Monitoring → getActiveCount(), getQueue().size(), etc.
What are different Thread Pooling types in ExecutorService ?
Core Idea: backed by ThreadPoolExecutor.
Reuses threads → avoids expensive create/tear-down.
Provides flexible pooling strategies depending on workload.
The Executors factory class provides common pool configurations:
1. FixedThreadPool
Fixed number of threads (core = max).
All threads busy → tasks wait in an unbounded LinkedBlockingQueue.
✅ Good for stable workloads, controlling max concurrency.
⚠ Risk: Queue grows unbounded → possible OOM.
2. CachedThreadPool
Unbounded threads (max = Integer.MAX_VALUE).
Uses a SynchronousQueue which doesn't hold tasks. Instead, it hands off tasks to waiting threads or creates a new thread if none are available.
Idle threads die after 60s.
✅ Best for many short-lived async tasks with unpredictable load.
⚠ Risk: High load → unbounded thread creation can lead to system resource exhaustion
3. ScheduledThreadPool
Runs tasks after delay or periodically.
Uses a DelayedWorkQueue, which holds tasks until their scheduled execution time. → Problem: unbounded size, potential memory issues.
✅ Use for cron-like jobs (e.g., heartbeat, cache refresh).
⚠ Risk: Too many scheduled tasks → memory growth.
4. ForkJoinPool
Designed for divide-and-conquer (recursive tasks).
Uses work-stealing → idle threads steal work from busy ones.
Default backing pool for parallel streams (ForkJoinPool.commonPool()).
✅ Best for CPU-bound parallel tasks (sorting, recursive algorithms).
✅ In production → always configure ThreadPoolExecutor directly.
Why Use ThreadPoolExecutor Instead of the Executors Factory ?
Factory methods (newFixedThreadPool, newCachedThreadPool) often use unsafe defaults:
Unbounded queues (risk: OOM).
Unlimited threads (risk: thread explosion).
ThreadPoolExecutor gives full control over sizing, queuing, thread creation, and rejection handling, and lets you adjust the configuration as needed.
ThreadPoolExecutor lets you configure core/max threads, queue type, thread factory, and rejection policy. In production, use bounded queues + CallerRunsPolicy to avoid OOM and uncontrolled thread growth, as in the sketch below.
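A sketch of such a configuration (pool sizes, queue capacity, and timeout are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                   // core pool size
                8,                                   // max pool size
                60, TimeUnit.SECONDS,                // idle timeout for non-core threads
                new ArrayBlockingQueue<>(100),       // bounded queue: no OOM from a backlog
                new ThreadPoolExecutor.CallerRunsPolicy()); // backpressure when saturated

        pool.submit(() ->
                System.out.println("task runs on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```

CallerRunsPolicy makes the submitting thread execute rejected tasks itself, which naturally slows producers down instead of dropping work or throwing.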
How does ExecutorService improve over Executor ?
1. Executor (Root Interface)
Method: void execute(Runnable command)
Purpose: Decouples task submission from execution.
Limitations:
No lifecycle control.
No support for returning results.
2. ExecutorService (extends Executor)
Adds:
Lifecycle management: shutdown(), shutdownNow(), awaitTermination().
Result handling: Future<T> submit(Callable<T>).
Batch methods: invokeAll(), invokeAny().
👉 Main production interface for managing thread pools.
ScheduledExecutorService: Extends ExecutorService, adds scheduling.
Use case: cron jobs, periodic health checks, cache refresh tasks.
What is the thread pool lifecycle ??
ExecutorService manages thread pools with lifecycle states (RUNNING → SHUTDOWN → TERMINATED). Always use shutdown() for a graceful exit, awaitTermination() to wait, and shutdownNow() as a last resort. For tasks, prefer submit(Callable) or batch methods (invokeAll, invokeAny) over execute() to get results and handle exceptions.
Lifecycle States
RUNNING → accepts tasks.
SHUTDOWN → no new tasks, finish queued ones.
TERMINATED → all tasks complete, threads stopped.
Shutdown APIs
shutdown() → graceful, completes existing tasks.
awaitTermination(timeout, unit) → wait for shutdown.
shutdownNow() → forceful, interrupts running tasks, clears the queue.
isShutdown() → true if shutdown initiated.
isTerminated() → true if fully stopped.
👉 Best Practice: shutdown() → awaitTermination() → fallback to shutdownNow() (see the sketch below).
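A sketch of this best practice, adapted from the shutdown pattern in the ExecutorService documentation (the timeouts are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class PoolShutdown {
    static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown();                                     // stop accepting new tasks
        try {
            if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                pool.shutdownNow();                          // interrupt running tasks
                if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
                    System.err.println("Pool did not terminate");
                }
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();                              // re-cancel if we were interrupted
            Thread.currentThread().interrupt();              // preserve the interrupt status
        }
    }
}
```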
Task Submission
execute(Runnable) → fire-and-forget, no result.
submit(Runnable) → returns Future<?> (result = null).
submit(Runnable, T result) → predefined result in Future<T>.
submit(Callable<T>) → returns Future<T>, supports checked exceptions.
invokeAll(Collection<Callable>) → run all, block until complete (or timeout).
invokeAny(Collection<Callable>) → return the result of the first successful task, cancel the others.
What is Callable?
Represents a task that can run asynchronously.
Unlike Runnable, it:
Returns a result (V call()).
Can throw checked exceptions.
Best Practices
Keep tasks small, focused, single-responsibility.
Handle exceptions properly inside call().
Avoid long blocking calls unless designed for it.
Use with Future or CompletableFuture to retrieve results.
Feature | Runnable (run()) | Callable (call()) |
---|---|---|
Return value | ❌ void | ✅ returns a result |
Exceptions | ❌ only unchecked | ✅ can throw checked |
Integration | Thread, execute() | ExecutorService.submit(), Future |
What is Future ??
Future is a "promise" that you'll get a result later: a placeholder for the result of an async computation.
Returned when you submit a Callable/Runnable to an ExecutorService.
Lets you check status, block for the result, or cancel the task.
Core Methods
V get() → blocks until the result is available.
V get(timeout, unit) → blocks with a timeout.
boolean isDone() → true if the task completed (success, failure, or cancelled).
boolean cancel(boolean mayInterruptIfRunning) → try to cancel the task; true = interrupt if running, false = cancel only if not started.
boolean isCancelled() → true if the task was cancelled.
Limitations (Why CompletableFuture was added later)
get() is blocking → no non-blocking callbacks.
No chaining of tasks (can’t compose pipelines).
Can’t combine multiple Futures easily.
Best Practices
Use get(timeout) instead of an indefinite get().
Always handle InterruptedException and ExecutionException.
For async composition → prefer CompletableFuture in modern Java.
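A minimal sketch of submit() plus a bounded get() with cancellation:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> future = pool.submit(() -> {   // Callable<Integer>
            Thread.sleep(100);                          // simulate work
            return 42;
        });

        try {
            Integer result = future.get(1, TimeUnit.SECONDS);  // bounded blocking
            System.out.println("Result: " + result);
        } catch (TimeoutException e) {
            future.cancel(true);                        // interrupt the task if still running
        } catch (ExecutionException e) {
            System.err.println("Task failed: " + e.getCause());
        } finally {
            pool.shutdown();
        }
    }
}
```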
What is CompletableFuture ??
What It Is
Introduced in Java 8.
An enhanced Future with non-blocking, async pipelines, chaining, combining, and error handling.
Can work with or without ExecutorService (defaults to ForkJoinPool).
Why Better than Future?
✅ Chaining → build pipelines (thenApply, thenAccept, thenRun).
✅ Combining → merge results from multiple tasks (thenCombine, allOf, anyOf).
✅ Exception handling → recover gracefully (exceptionally, handle).
✅ Non-blocking → callback-style, no need to block with get().
✅ Manually Completable → can create an empty CF and complete it later.
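A minimal pipeline sketch (the values and supplier bodies are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class CfDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 100);
        CompletableFuture<Integer> tax = CompletableFuture.supplyAsync(() -> 20);

        CompletableFuture<Void> pipeline = price
                .thenCombine(tax, Integer::sum)                      // combine two async results
                .thenApply(total -> "Total: " + total)               // chain a transformation
                .exceptionally(ex -> "Fallback: " + ex.getMessage()) // recover from failure
                .thenAccept(System.out::println);                    // consume, non-blocking

        pipeline.join();   // only so this demo doesn't exit before the async pipeline runs
    }
}
```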
What are the Atomic Classes ??
1. What They Are
Provide lock-free, thread-safe operations on single variables.
Implemented using CAS (Compare-And-Swap) at the JVM/hardware level.
✅ Avoids synchronized / explicit locks → better performance under contention.
2. Core Classes
AtomicInteger / AtomicLong → counters.
AtomicBoolean → thread-safe flag.
AtomicReference<T> → safely hold and update object references.
(Advanced) LongAdder / LongAccumulator → better scaling for counters under high contention (Java 8+).
3. Key Methods (AtomicInteger example)
get() → current value.
set(value) → assign a value.
incrementAndGet() → pre-increment (++i).
getAndIncrement() → post-increment (i++).
decrementAndGet(), getAndDecrement().
addAndGet(delta), getAndAdd(delta).
compareAndSet(expected, update) → CAS update if the value matches.
4. Use Cases
Counters → requests served, active sessions, unique IDs.
Flags → shutdown signals, on/off switches.
Shared references → safely swap objects in multi-threaded code.
Lock-free data structures → stacks, queues (via AtomicReference).
5. Best Practices
Use atomic classes for simple shared variables.
For complex multi-variable updates, prefer locks or higher-level concurrency utilities.
For very high-contention counters → use LongAdder instead of AtomicInteger.
✅ Interview One-Liner
Atomic classes provide lock-free, thread-safe operations on single variables using CAS. They’re ideal for counters, flags, and shared references without the overhead of locks.
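A quick sketch of CAS-style updates and a high-contention counter:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class AtomicDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        counter.incrementAndGet();                    // atomic ++i → value is now 1
        counter.compareAndSet(1, 10);                 // CAS: set to 10 only if currently 1
        System.out.println(counter.get());            // 10

        // LongAdder spreads contention across internal cells; sum() folds them on read.
        LongAdder hits = new LongAdder();
        hits.increment();
        hits.add(5);
        System.out.println(hits.sum());               // 6
    }
}
```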
Atomic Classes vs Locks
Aspect | Atomic Classes | Locks / synchronized |
---|---|---|
Scope | Single variable updates | Multiple variables / critical section |
Mechanism | Lock-free (CAS) | Blocking (mutual exclusion) |
Performance | Faster, lightweight under contention | Heavier, context-switch overhead |
Fairness | No fairness, may spin/retry | Can be fair/unfair (ReentrantLock fairness flag) |
Use Case | Counters, flags, atomic refs | Complex updates across multiple fields |
Use Atomic classes for simple, single-variable concurrency (counters, flags). Use Locks when multiple variables must be updated atomically.
Design a Thread-Safe System (Interview Answer)
Phase 1:
First Line of defense: Design Immutable objects whenever possible - they are automatically thread-safe.
Statelessness: Prefer stateless components over stateful ones
If Stateful is required, then controlled access to mutable state using the right concurrency utilities using thread-safe collections and atomic variables
"For complex transactions, I'd implement fine-grained locking with clear lock ordering to prevent deadlocks."
"I'd configure bounded thread pools with backpressure to prevent resource exhaustion."
Optimize scalability: reduce lock contention, use non-blocking structures, and configure bounded thread pools with backpressure.
"Finally, I'd add timeouts, cancellation support, and comprehensive monitoring for production resilience."
Reliability: always use timeouts, cancellation, and graceful shutdown.
Observability: name threads, log errors, and expose metrics for monitoring
💡 Key Differentiators to Mention
"I prefer composition over manual synchronization - using ConcurrentHashMap
instead of synchronizing a regular HashMap
."
"I always consider the trade-offs - read/write locks for read-heavy workloads, lock striping for high contention."
"I design for graceful degradation - using circuit breakers and bulkheads when external dependencies fail."
How do you choose locks and synchronization ??
Scenario | Preferred Tool | Why |
---|---|---|
Mutual exclusion | ReentrantLock / synchronized | Clear intent, reentrancy |
Read-heavy, write-rarely | ReadWriteLock | Better concurrency |
Resource pooling | Semaphore | Multiple permits |
Coordination | CountDownLatch | Simple point-to-point |