Interview Questions

This comprehensive guide covers common and advanced Java concurrency interview questions, with detailed answers and code examples suitable for senior Java engineers.


1. What is the difference between a process and a thread?

Answer: A process is an independent program with its own memory space, while a thread is a lightweight execution unit within a process.

Key differences:

  • Memory: Processes have separate memory spaces; threads share the same memory space within a process
  • Communication: Inter-process communication is more complex than inter-thread communication
  • Creation/Termination: Creating/terminating processes is more resource-intensive than threads
  • Context switching: Switching between threads is faster than switching between processes
  • Isolation: Processes are isolated; a crash in one process doesn’t affect others, while a thread crash may affect the entire process

Example:

// Creating a new process
ProcessBuilder pb = new ProcessBuilder("java", "-jar", "app.jar");
Process process = pb.start();
// Creating a new thread
Thread thread = new Thread(() -> {
System.out.println("Running in a new thread");
});
thread.start();

2. What are the different ways to create a thread in Java?

Answer: There are four main ways to create a thread in Java:

1. Extending the Thread class:

class MyThread extends Thread {
public void run() {
System.out.println("Thread running: " + Thread.currentThread().getName());
}
}
// Usage
MyThread thread = new MyThread();
thread.start();

2. Implementing the Runnable interface:

class MyRunnable implements Runnable {
public void run() {
System.out.println("Thread running: " + Thread.currentThread().getName());
}
}
// Usage
Thread thread = new Thread(new MyRunnable());
thread.start();

3. Using lambda expressions (Java 8+):

Thread thread = new Thread(() -> {
System.out.println("Thread running: " + Thread.currentThread().getName());
});
thread.start();

4. Using the Executor framework:

ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(() -> {
System.out.println("Thread running: " + Thread.currentThread().getName());
});
executor.shutdown();

Best practice: Implementing Runnable is generally preferred over extending Thread as it:

  • Doesn’t use up the single inheritance slot, so the class can still extend another class
  • Allows the task to be executed in different contexts (Thread, ExecutorService, etc.)
  • Separates the task (what) from the execution mechanism (how)

3. What are the different states in the lifecycle of a thread?

Answer: A Java thread goes through the following states during its lifecycle:

  1. NEW: Thread is created but not yet started
  2. RUNNABLE: Thread is ready to run and waiting for CPU allocation
  3. BLOCKED: Thread is waiting to acquire a monitor lock
  4. WAITING: Thread is waiting indefinitely for another thread to perform a particular action
  5. TIMED_WAITING: Thread is waiting for another thread for a specified period
  6. TERMINATED: Thread has completed execution or was stopped

Code example to demonstrate thread states:

Thread thread = new Thread(() -> {
    try {
        // Thread moves to TIMED_WAITING state while sleeping
        Thread.sleep(1000);
        // A thread contending for a monitor held by another thread would be BLOCKED;
        // a thread calling Object.wait() with no timeout would be WAITING
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
// Thread is in NEW state
System.out.println("State: " + thread.getState()); // NEW
thread.start();
// Thread is now RUNNABLE (it may enter TIMED_WAITING almost immediately)
System.out.println("State: " + thread.getState()); // RUNNABLE
Thread.sleep(500);
// Thread should still be sleeping
System.out.println("State: " + thread.getState()); // TIMED_WAITING
// Eventually the thread will be TERMINATED
thread.join();
System.out.println("State: " + thread.getState()); // TERMINATED

4. What is thread safety? Why is it important?

Answer: Thread safety refers to the property of code that ensures it functions correctly during simultaneous execution by multiple threads. A class or method is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the threads by the runtime environment.

Importance:

  • Prevents data corruption and race conditions
  • Ensures program correctness in concurrent environments
  • Avoids hard-to-debug issues that may appear intermittently
  • Critical for applications that handle multiple users or tasks simultaneously

Thread safety can be achieved through:

  1. Immutability: Using immutable objects
  2. Synchronization: Using synchronized keyword or locks
  3. Atomic operations: Using atomic classes like AtomicInteger
  4. Thread confinement: Restricting access to data to a single thread
  5. Thread-local storage: Using ThreadLocal variables
  6. Concurrent collections: Using thread-safe collections
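
For instance, immutability (technique 1) removes the need for synchronization entirely, because state can never change after construction. A minimal sketch with an illustrative value class:

// Immutable value object: safe to share across threads without synchronization
public final class Point {
    private final int x;
    private final int y;
    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
    public int getX() { return x; }
    public int getY() { return y; }
    // "Mutation" returns a new instance instead of changing shared state
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}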

Example of thread-safe vs. non-thread-safe code:

// Not thread-safe
class Counter {
    private int count = 0;
    public void increment() {
        count++; // Not atomic operation
    }
    public int getCount() {
        return count;
    }
}
// Thread-safe version
class ThreadSafeCounter {
    private final AtomicInteger count = new AtomicInteger(0);
    public void increment() {
        count.incrementAndGet(); // Atomic operation
    }
    public int getCount() {
        return count.get();
    }
}

5. What is the difference between start() and run() methods in Thread?

Answer:

  • start(): Creates a new thread and causes this thread to begin execution. The JVM calls the run() method of this thread.
  • run(): Contains the code the thread is supposed to execute. If called directly, it runs in the current thread like an ordinary method call.

Key differences:

  1. Thread creation: start() creates a new thread; run() does not
  2. Execution context: start() executes in a new thread; run() executes in the current thread
  3. Multiple invocations: start() can be called only once per Thread object; run() can be called multiple times

Example:

Thread thread = new Thread(() -> {
System.out.println("Current thread: " + Thread.currentThread().getName());
});
// Creates and starts a new thread
thread.start();
// Output: Current thread: Thread-0
// Runs in the current thread (main)
thread.run();
// Output: Current thread: main
// Calling start() again throws IllegalThreadStateException
// thread.start(); // Error

6. What is synchronization in Java? Why is it needed?

Answer: Synchronization in Java is a mechanism that ensures that only one thread can access a resource at a time. It’s implemented using the synchronized keyword, which can be applied to methods or blocks of code.

Why it’s needed:

  1. Thread safety: Prevents data corruption when multiple threads access shared resources
  2. Visibility: Ensures changes made by one thread are visible to other threads
  3. Ordering: Establishes happens-before relationships between threads
  4. Atomicity: Ensures that operations are completed as a single, indivisible unit

Types of synchronization:

  1. Method synchronization: Locks the entire method
  2. Block synchronization: Locks only a specific block of code
  3. Static synchronization: Locks on the class object

Example:

class Counter {
    private int count = 0;
    // Method synchronization
    public synchronized void increment() {
        count++;
    }
    // Block synchronization
    public void incrementWithBlock() {
        synchronized (this) {
            count++;
        }
        // Other non-synchronized code
    }
    // Static synchronization
    public static synchronized void staticMethod() {
        // Synchronized on Counter.class
    }
    public int getCount() {
        synchronized (this) {
            return count;
        }
    }
}

Important notes:

  • Synchronization introduces overhead and can reduce performance
  • Excessive synchronization can lead to deadlocks
  • Synchronization should be used judiciously and only when necessary

7. What is the difference between synchronized method and synchronized block?

Answer: Both synchronized methods and blocks use intrinsic locks to ensure thread safety, but they differ in scope and flexibility.

Synchronized Method:

public synchronized void method() {
// Entire method is synchronized on 'this'
}
public static synchronized void staticMethod() {
// Synchronized on the Class object
}

Synchronized Block:

public void method() {
// Non-synchronized code
synchronized(this) {
// Synchronized code
}
// More non-synchronized code
}
public void methodWithDifferentLock() {
Object lock = new Object();
synchronized(lock) {
// Synchronized on the lock object
}
}

Key differences:

  1. Granularity: Blocks provide finer-grained control over what code is synchronized
  2. Lock object: Methods always lock on this (or the Class object for static methods), while blocks can lock on any object
  3. Performance: Blocks can be more efficient by minimizing the synchronized code
  4. Flexibility: Blocks allow using different lock objects for different parts of the code

Best practices:

  • Use synchronized blocks instead of methods when possible
  • Keep synchronized blocks as small as possible
  • Avoid synchronizing on publicly accessible objects
  • Consider using explicit locks (ReentrantLock, etc.) for more advanced scenarios

8. What is the volatile keyword in Java? What guarantees does it provide?

Answer: The volatile keyword in Java is used to indicate that a variable’s value may be modified by different threads simultaneously. It provides two key guarantees:

  1. Visibility: Changes made to a volatile variable by one thread are immediately visible to all other threads
  2. Ordering: Prevents instruction reordering optimizations around volatile accesses

What volatile does NOT provide:

  • It does not make compound operations (like i++) atomic
  • It does not provide mutual exclusion
  • It does not create a critical section

Example:

public class SharedFlag {
    private volatile boolean flag = false;
    public void setFlag() {
        flag = true; // Write is immediately visible to other threads
    }
    public boolean isSet() {
        return flag; // Always reads the most recent value
    }
    // Incorrect usage - not atomic
    private volatile int counter = 0;
    public void increment() {
        counter++; // Not atomic despite volatile
    }
}

Appropriate uses:

  • Status flags that are read by multiple threads
  • Double-checked locking pattern (Java 5+)
  • Publishing immutable objects without synchronization

Example of double-checked locking:

class Singleton {
    private static volatile Singleton instance;
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

9. What is a race condition? How can it be prevented?

Answer: A race condition occurs when the behavior of a program depends on the relative timing or interleaving of multiple threads or processes. It happens when threads operate on shared data concurrently, and the final outcome depends on the order of execution.

Example of a race condition:

class Counter {
    private int count = 0;
    public void increment() {
        count++; // Read-modify-write operation is not atomic
    }
    public int getCount() {
        return count;
    }
}
// Usage that causes race condition
Counter counter = new Counter();
Thread t1 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        counter.increment();
    }
});
Thread t2 = new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        counter.increment();
    }
});
t1.start();
t2.start();
t1.join();
t2.join();
System.out.println(counter.getCount()); // May not be 2000

Prevention techniques:

  1. Synchronization: Use synchronized methods or blocks

    public synchronized void increment() {
    count++;
    }
  2. Atomic variables: Use atomic classes from java.util.concurrent.atomic

    private AtomicInteger count = new AtomicInteger(0);
    public void increment() {
    count.incrementAndGet();
    }
  3. Locks: Use explicit locks from java.util.concurrent.locks

    private final Lock lock = new ReentrantLock();
    public void increment() {
    lock.lock();
    try {
    count++;
    } finally {
    lock.unlock();
    }
    }
  4. Thread confinement: Restrict access to data to a single thread

    ThreadLocal<Integer> localCounter = ThreadLocal.withInitial(() -> 0);
  5. Immutable objects: Use immutable data structures that can’t be modified after creation

  6. Non-blocking algorithms: Use compare-and-swap (CAS) operations

    private final AtomicInteger atomicCount = new AtomicInteger(0);
    public void incrementWithCAS() {
        int current;
        int next;
        do {
            current = atomicCount.get();
            next = current + 1;
        } while (!atomicCount.compareAndSet(current, next));
    }

10. What is a deadlock? How can it be prevented?

Answer: A deadlock occurs when two or more threads are blocked forever, each waiting for the other to release a lock. Deadlocks typically occur when multiple threads need the same locks but obtain them in different orders.

Classic deadlock example:

Object lock1 = new Object();
Object lock2 = new Object();
Thread t1 = new Thread(() -> {
    synchronized (lock1) {
        System.out.println("Thread 1: Holding lock 1...");
        try { Thread.sleep(100); } catch (InterruptedException e) {}
        System.out.println("Thread 1: Waiting for lock 2...");
        synchronized (lock2) {
            System.out.println("Thread 1: Holding lock 1 & 2...");
        }
    }
});
Thread t2 = new Thread(() -> {
    synchronized (lock2) {
        System.out.println("Thread 2: Holding lock 2...");
        try { Thread.sleep(100); } catch (InterruptedException e) {}
        System.out.println("Thread 2: Waiting for lock 1...");
        synchronized (lock1) {
            System.out.println("Thread 2: Holding lock 1 & 2...");
        }
    }
});
t1.start();
t2.start();

Prevention techniques:

  1. Lock ordering: Always acquire locks in a fixed, global order

    // Both threads use the same lock order
    synchronized(lock1) {
    synchronized(lock2) {
    // Work with both resources
    }
    }
  2. Lock timeouts: Use timed lock attempts

    Lock lock1 = new ReentrantLock();
    Lock lock2 = new ReentrantLock();
    boolean gotLock1 = false;
    boolean gotLock2 = false;
    try {
        gotLock1 = lock1.tryLock(1, TimeUnit.SECONDS);
        if (gotLock1) {
            gotLock2 = lock2.tryLock(1, TimeUnit.SECONDS);
            if (gotLock2) {
                // Work with both locks
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        // Release whatever was acquired, in reverse order
        if (gotLock2) lock2.unlock();
        if (gotLock1) lock1.unlock();
    }
  3. Deadlock detection: Use thread dumps or management APIs to detect and recover from deadlocks

    ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
    long[] deadlockedThreads = threadMXBean.findDeadlockedThreads();
    if (deadlockedThreads != null) {
    // Handle deadlock
    }
  4. Lock hierarchy: Design a proper lock hierarchy and document it

  5. Avoid nested locks: Minimize the use of nested locks when possible

  6. Use higher-level concurrency utilities: Use concurrent collections, atomic variables, etc., which are designed to avoid deadlocks

11. What is the difference between synchronized and ReentrantLock?

Answer: synchronized is a built-in language feature for locking, while ReentrantLock is a class in the java.util.concurrent.locks package that offers more advanced features.

Key differences:

Feature | synchronized | ReentrantLock
--- | --- | ---
Syntax | Language construct | API-based
Timed lock acquisition | No | Yes (tryLock(time))
Interruptible locking | No | Yes (lockInterruptibly())
Non-blocking attempts | No | Yes (tryLock())
Fairness policy | No | Yes (optional constructor parameter)
Multiple conditions | No | Yes (via newCondition())
Lock state inspection | No | Yes (methods like isHeldByCurrentThread())
Explicit unlocking | Automatic | Manual (must call unlock() in finally block)

Example of ReentrantLock:

private final ReentrantLock lock = new ReentrantLock(true); // Fair lock
public void method() {
    lock.lock();
    try {
        // Critical section
    } finally {
        lock.unlock(); // Must be in finally block
    }
}
public void methodWithTimeout() throws InterruptedException {
    if (lock.tryLock(1, TimeUnit.SECONDS)) {
        try {
            // Critical section
        } finally {
            lock.unlock();
        }
    } else {
        // Could not acquire lock within timeout
    }
}
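
The table above also lists support for multiple conditions; a minimal sketch of a bounded buffer built on two Condition objects (the class and field names here are illustrative, not from the original text):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }
    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await(); // Wait until there is space
            }
            items.addLast(item);
            notEmpty.signal(); // Wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await(); // Wait until an element is available
            }
            T item = items.removeFirst();
            notFull.signal(); // Wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}

Using two separate conditions lets producers and consumers wait on distinct wait sets instead of sharing a single intrinsic monitor.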

Best practices:

  • Use synchronized for simple locking needs
  • Use ReentrantLock when you need advanced features
  • Always release locks in a finally block
  • Consider fairness requirements

12. What is a ReadWriteLock? When would you use it?

Answer: A ReadWriteLock maintains a pair of locks: one for read-only operations and one for write operations. Multiple threads can hold the read lock simultaneously, but the write lock is exclusive.

Key characteristics:

  • Multiple readers can access simultaneously
  • Writers have exclusive access
  • Whether waiting writers are favored over newly arriving readers depends on the implementation and its fairness settings (favoring writers helps avoid writer starvation)

Example:

public class ThreadSafeCache {
    private final Map<String, Object> cache = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock readLock = lock.readLock();
    private final Lock writeLock = lock.writeLock();
    public Object get(String key) {
        readLock.lock();
        try {
            return cache.get(key);
        } finally {
            readLock.unlock();
        }
    }
    public void put(String key, Object value) {
        writeLock.lock();
        try {
            cache.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
    public boolean containsKey(String key) {
        readLock.lock();
        try {
            return cache.containsKey(key);
        } finally {
            readLock.unlock();
        }
    }
    public Object remove(String key) {
        writeLock.lock();
        try {
            return cache.remove(key);
        } finally {
            writeLock.unlock();
        }
    }
}

When to use it:

  • Read-heavy workloads with infrequent updates
  • When read operations don’t modify shared data
  • When you want to improve concurrency for read operations

Types:

  • ReentrantReadWriteLock: Standard implementation
  • StampedLock (Java 8+): Provides optimistic reading and better throughput
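
As a rough illustration of StampedLock’s optimistic reading (a sketch only; the class and field names are made up for this example):

import java.util.concurrent.locks.StampedLock;
class OptimisticCounter {
    private final StampedLock sl = new StampedLock();
    private long value;
    public void increment() {
        long stamp = sl.writeLock(); // Exclusive write lock
        try {
            value++;
        } finally {
            sl.unlockWrite(stamp);
        }
    }
    public long read() {
        long stamp = sl.tryOptimisticRead(); // No blocking, just a version stamp
        long current = value;
        if (!sl.validate(stamp)) {
            // A write happened in between; fall back to a full read lock
            stamp = sl.readLock();
            try {
                current = value;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return current;
    }
}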

13. What are atomic variables in Java? How do they work?

Answer: Atomic variables are classes in the java.util.concurrent.atomic package that support lock-free, thread-safe operations on single variables. They use low-level atomic machine instructions like compare-and-swap (CAS) to ensure atomicity without locking.

Common atomic classes:

  • AtomicInteger, AtomicLong, AtomicBoolean
  • AtomicReference<V>
  • AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray<V>
  • AtomicStampedReference<V>, AtomicMarkableReference<V>

How they work:

  1. Read the current value
  2. Compute a new value based on the current value
  3. Use CAS to update only if the current value hasn’t changed
  4. If the value has changed, retry from step 1

Example:

public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);
    public void increment() {
        count.incrementAndGet(); // Atomic operation
    }
    public void decrement() {
        count.decrementAndGet(); // Atomic operation
    }
    public int get() {
        return count.get();
    }
    public void update() {
        // CAS loop pattern
        int current;
        do {
            current = count.get();
        } while (!count.compareAndSet(current, current + 2));
    }
}

Benefits:

  • Better performance than locks for single variables
  • Immune to deadlocks and livelocks
  • Support for atomic compound actions

Limitations:

  • Only work on single variables or fields
  • Not suitable for coordinating multiple related operations
  • Can suffer from ABA problems (solved by AtomicStampedReference)

14. What is the ABA problem in concurrent programming?

Answer: The ABA problem occurs in concurrent algorithms, particularly lock-free ones, when a thread reads a value A, another thread changes it to B and then back to A, and the first thread doesn’t detect the change.

Example scenario:

  1. Thread 1 reads value A
  2. Thread 1 is paused
  3. Thread 2 changes value from A to B
  4. Thread 2 changes value from B back to A
  5. Thread 1 resumes and sees value A, assuming nothing has changed
  6. Thread 1 performs an operation that may be incorrect because it missed the intermediate state

Example with a stack:

// Simplified lock-free stack with ABA problem
class LockFreeStack<T> {
    private AtomicReference<Node<T>> top = new AtomicReference<>(null);
    public void push(T item) {
        Node<T> newHead = new Node<>(item);
        Node<T> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));
    }
    public T pop() {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = top.get();
            if (oldHead == null) return null;
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }
    private static class Node<T> {
        final T item;
        Node<T> next;
        Node(T item) {
            this.item = item;
        }
    }
}

Solutions:

  1. AtomicStampedReference: Adds a stamp (version number) that changes with each update

    AtomicStampedReference<Integer> asr = new AtomicStampedReference<>(100, 0);
    int[] stamp = new int[1];
    int initialValue = asr.get(stamp);
    int initialStamp = stamp[0];
    // Update the value to 200 only if the value and stamp haven't changed
    boolean success = asr.compareAndSet(initialValue, 200, initialStamp, initialStamp + 1);
  2. AtomicMarkableReference: Simpler version with a boolean mark instead of an integer stamp (see the sketch after this list)

  3. Hazard pointers: Track which references are currently being accessed

  4. Memory reclamation techniques: Ensure that objects aren’t reused too quickly
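
A minimal sketch of the AtomicMarkableReference approach mentioned in item 2 (the values here are illustrative):

import java.util.concurrent.atomic.AtomicMarkableReference;
// Pairs a reference with a boolean mark (e.g. "logically deleted")
AtomicMarkableReference<String> ref = new AtomicMarkableReference<>("A", false);
boolean[] markHolder = new boolean[1];
String current = ref.get(markHolder); // Reads the reference and the mark together
// Succeeds only if both the reference and the mark are unchanged
boolean swapped = ref.compareAndSet(current, "B", markHolder[0], true);
System.out.println("Swapped: " + swapped + ", marked: " + ref.isMarked());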

15. What is a ThreadLocal variable? When would you use it?

Answer: ThreadLocal provides thread-local variables, which are variables that are local to each thread. Each thread has its own, independently initialized copy of the variable, and changes made by one thread don’t affect other threads.

Example:

public class ThreadLocalExample {
    // Each thread has its own copy
    private static final ThreadLocal<SimpleDateFormat> dateFormat =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));
    public String formatDate(Date date) {
        // Uses the thread's own instance
        return dateFormat.get().format(date);
    }
    // Proper cleanup in frameworks/containers
    public void cleanup() {
        dateFormat.remove();
    }
}

When to use it:

  1. Thread safety: When you need thread-safe access to non-thread-safe objects (like SimpleDateFormat)
  2. Per-thread context: Storing user IDs, transaction IDs, or other context for the current thread
  3. Reducing contention: When sharing would cause high contention
  4. Performance: Avoiding synchronization overhead for thread-specific data

Important considerations:

  • Memory usage increases with the number of threads
  • Memory leaks can occur if not properly cleaned up, especially in application servers
  • Use remove() when the thread is done with the variable
  • Consider using ThreadLocal.withInitial() (Java 8+) for cleaner initialization

16. What are concurrent collections in Java? How are they different from synchronized collections?

Answer: Concurrent collections are thread-safe collections in the java.util.concurrent package designed for high concurrency. Synchronized collections are older thread-safe wrappers created using Collections.synchronizedXxx methods.

Key differences:

Aspect | Synchronized Collections | Concurrent Collections
--- | --- | ---
Implementation | Wrapper with synchronized methods | Specialized algorithms (lock striping, CAS)
Locking | Single lock for the entire collection | Fine-grained locking or lock-free
Iterators | Fail-fast (throw ConcurrentModificationException) | Weakly consistent (may reflect some changes)
Performance | Lower throughput under contention | Higher throughput under contention
Blocking operations | No built-in support | Some collections offer blocking operations

Common concurrent collections:

  • ConcurrentHashMap: High-concurrency map implementation
  • CopyOnWriteArrayList: Thread-safe variant of ArrayList for read-heavy workloads
  • CopyOnWriteArraySet: Set implementation backed by CopyOnWriteArrayList
  • ConcurrentSkipListMap: Concurrent NavigableMap implementation
  • ConcurrentSkipListSet: Concurrent NavigableSet implementation
  • ConcurrentLinkedQueue: Non-blocking queue
  • ConcurrentLinkedDeque: Non-blocking double-ended queue
  • BlockingQueue implementations: LinkedBlockingQueue, ArrayBlockingQueue, etc.

Example:

// Synchronized collection
Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<>());
// Must manually synchronize for compound operations
synchronized (syncMap) {
if (!syncMap.containsKey("key")) {
syncMap.put("key", "value");
}
}
// Concurrent collection
ConcurrentMap<String, String> concMap = new ConcurrentHashMap<>();
// Atomic compound operations built-in
concMap.putIfAbsent("key", "value");
// Iteration differs
List<String> syncList = Collections.synchronizedList(new ArrayList<>());
synchronized (syncList) { // Must synchronize iteration
for (String item : syncList) {
// Safe iteration
}
}
List<String> concList = new CopyOnWriteArrayList<>();
for (String item : concList) { // No synchronization needed
// Safe iteration, but may not reflect concurrent modifications
}

Best practices:

  • Prefer concurrent collections over synchronized collections
  • Choose the right collection for your access pattern
  • Be aware of the iterator consistency guarantees
  • Use the built-in atomic operations when available

17. How does ConcurrentHashMap work internally? How is it different from HashMap and Hashtable?

Answer: ConcurrentHashMap is a thread-safe hash table designed for high concurrency. It achieves this through techniques like lock striping (segmentation) in earlier versions and a more sophisticated approach in Java 8+.

Internal working:

  • Java 7 and earlier: Uses segments (essentially mini-hashtables) with a separate lock per segment
  • Java 8+: Uses a combination of CAS operations for updates and synchronized blocks on individual hash table bins
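
At the API level, this per-bin locking is what makes single-key read-modify-write methods safe without external synchronization; a small sketch using merge() and compute():

import java.util.concurrent.ConcurrentHashMap;
ConcurrentHashMap<String, Integer> wordCounts = new ConcurrentHashMap<>();
// Atomically increments the count for a key, even under contention
wordCounts.merge("hello", 1, Integer::sum);
// Atomically recomputes the value for a key
wordCounts.compute("hello", (key, value) -> value == null ? 1 : value + 1);
System.out.println(wordCounts.get("hello")); // 2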

Key differences:

Feature | HashMap | Hashtable | ConcurrentHashMap
--- | --- | --- | ---
Thread safety | Not thread-safe | Thread-safe | Thread-safe
Locking mechanism | None | Single lock on entire table | Fine-grained locking
Null keys/values | Allows null keys and values | Doesn’t allow null keys or values | Doesn’t allow null keys or values
Iterator behavior | Fail-fast | Fail-fast | Weakly consistent
Performance under contention | N/A (not thread-safe) | Poor (single lock) | Good (fine-grained locking)
Atomic compound operations | None | None | putIfAbsent, replace, etc.

Example:

// HashMap - not thread-safe
Map<String, Integer> hashMap = new HashMap<>();
// Hashtable - thread-safe but poor concurrency
Map<String, Integer> hashtable = new Hashtable<>();
// ConcurrentHashMap - thread-safe with good concurrency
// (declared as ConcurrentHashMap so the bulk operations below are available)
ConcurrentHashMap<String, Integer> concurrentMap = new ConcurrentHashMap<>();
// Atomic operations in ConcurrentHashMap
concurrentMap.putIfAbsent("key", 1);
concurrentMap.replace("key", 1, 2);
concurrentMap.remove("key", 2);
// Bulk operations (the first argument is the parallelism threshold)
concurrentMap.forEach(8, (k, v) -> System.out.println(k + ": " + v));

Performance considerations:

  • ConcurrentHashMap has slightly higher overhead for uncontended access
  • ConcurrentHashMap performs much better under contention
  • Since Java 8, the concurrencyLevel constructor argument is only a sizing hint; concurrency effectively scales with the number of hash table bins

18. What are blocking queues in Java? Give examples of different types.

Answer: Blocking queues are thread-safe queues that support operations that wait (block) for the queue to become non-empty when retrieving elements, or wait for space to become available when adding elements. They are part of the java.util.concurrent package and implement the BlockingQueue interface.

Key operations:

  • put(e): Adds an element, waiting if necessary for space to become available
  • take(): Retrieves and removes an element, waiting if necessary for an element to become available
  • offer(e, time, unit): Adds an element, waiting up to the specified time if necessary
  • poll(time, unit): Retrieves and removes an element, waiting up to the specified time if necessary

Types of blocking queues:

  1. ArrayBlockingQueue: Bounded queue backed by an array

    // Fixed capacity, optional fairness policy
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(100, true);
  2. LinkedBlockingQueue: Optionally bounded queue backed by linked nodes

    // Unbounded
    BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
    // Bounded
    BlockingQueue<String> bounded = new LinkedBlockingQueue<>(100);
  3. PriorityBlockingQueue: Unbounded priority queue

    // Elements dequeued according to their natural order
    BlockingQueue<Integer> priorityQueue = new PriorityBlockingQueue<>();
    // Or with a custom comparator
    BlockingQueue<Task> taskQueue = new PriorityBlockingQueue<>(11,
    Comparator.comparing(Task::getPriority));
  4. DelayQueue: Queue where elements can only be taken when their delay has expired

    BlockingQueue<DelayedTask> delayQueue = new DelayQueue<>();
    // Elements must implement Delayed interface
    delayQueue.put(new DelayedTask("Task", 5, TimeUnit.SECONDS));
  5. SynchronousQueue: Queue with no internal capacity

    // Each put must wait for a take, and vice versa
    BlockingQueue<String> syncQueue = new SynchronousQueue<>();
  6. LinkedTransferQueue: Unbounded queue that allows producers to wait for consumers

    TransferQueue<String> transferQueue = new LinkedTransferQueue<>();
    // Normal operations plus transfer methods
    transferQueue.transfer("item"); // Waits until received by a consumer
  7. LinkedBlockingDeque: Deque version of LinkedBlockingQueue

    BlockingDeque<String> blockingDeque = new LinkedBlockingDeque<>();
    // Supports operations at both ends
    blockingDeque.putFirst("first");
    blockingDeque.putLast("last");

Example: Producer-Consumer pattern

class Producer implements Runnable {
    private final BlockingQueue<String> queue;
    Producer(BlockingQueue<String> queue) {
        this.queue = queue;
    }
    @Override
    public void run() {
        try {
            for (int i = 0; i < 100; i++) {
                String item = "Item " + i;
                queue.put(item); // Blocks if queue is full
                System.out.println("Produced: " + item);
                Thread.sleep(100);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
class Consumer implements Runnable {
    private final BlockingQueue<String> queue;
    Consumer(BlockingQueue<String> queue) {
        this.queue = queue;
    }
    @Override
    public void run() {
        try {
            while (true) {
                String item = queue.take(); // Blocks if queue is empty
                System.out.println("Consumed: " + item);
                Thread.sleep(200);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
// Usage
BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
new Thread(new Producer(queue)).start();
new Thread(new Consumer(queue)).start();

19. What is the difference between CopyOnWriteArrayList and ArrayList?

Answer: CopyOnWriteArrayList is a thread-safe variant of ArrayList in which all mutative operations (add, set, remove, etc.) are implemented by creating a fresh copy of the underlying array. It’s designed for cases where reads vastly outnumber writes.

Key differences:

Feature | ArrayList | CopyOnWriteArrayList
--- | --- | ---
Thread safety | Not thread-safe | Thread-safe
Implementation | Backed by a resizable array | Creates a new array copy for each modification
Iterator behavior | Fail-fast | Snapshot view (never throws ConcurrentModificationException)
Performance for reads | Good | Good
Performance for writes | Good | Poor (creates a new array copy)
Memory usage | Efficient | Higher (due to copying)

Example:

// ArrayList - not thread-safe
List<String> arrayList = new ArrayList<>();
// CopyOnWriteArrayList - thread-safe
List<String> cowList = new CopyOnWriteArrayList<>();
// Adding elements
cowList.add("A"); // Creates a new array
cowList.add("B"); // Creates another new array
// Safe iteration without external synchronization
for (String s : cowList) {
System.out.println(s);
}
// Iterator doesn't support modification
Iterator<String> it = cowList.iterator();
while (it.hasNext()) {
String s = it.next();
// it.remove(); // Throws UnsupportedOperationException
}

When to use CopyOnWriteArrayList:

  • Read-heavy, write-rare scenarios
  • When you need thread-safe iteration without synchronization
  • When you need to prevent ConcurrentModificationException
  • For event listener lists that are rarely modified but often iterated

When to avoid CopyOnWriteArrayList:

  • Write-heavy scenarios
  • Large lists with frequent modifications
  • When memory usage is a concern

20. What are concurrent synchronizers in Java? Explain CountDownLatch, CyclicBarrier, and Semaphore.

Answer: Concurrent synchronizers are utility classes in the java.util.concurrent package that facilitate common forms of synchronization between threads.

1. CountDownLatch: A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.

Key characteristics:

  • Initialized with a count
  • countDown() decrements the count
  • await() blocks until count reaches zero
  • Cannot be reset once count reaches zero

Example:

public class CountDownLatchExample {
    public static void main(String[] args) throws InterruptedException {
        int workerCount = 5;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(workerCount);
        for (int i = 0; i < workerCount; i++) {
            final int workerId = i;
            new Thread(() -> {
                try {
                    startSignal.await(); // Wait for start signal
                    System.out.println("Worker " + workerId + " started");
                    // Do work
                    Thread.sleep((long) (Math.random() * 1000));
                    System.out.println("Worker " + workerId + " finished");
                    doneSignal.countDown(); // Signal completion
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
        // Start all workers simultaneously
        System.out.println("Starting all workers");
        startSignal.countDown();
        // Wait for all workers to finish
        doneSignal.await();
        System.out.println("All workers finished");
    }
}

Use cases:

  • Starting a group of threads simultaneously
  • Waiting for a group of threads to complete
  • Implementing a simple one-time gate

2. CyclicBarrier: A synchronization aid that allows a set of threads to wait for each other to reach a common barrier point.

Key characteristics:

  • Initialized with a party count (and optionally a runnable)
  • await() blocks until all parties have called await()
  • Can be reused after all parties have reached the barrier
  • Optional barrier action runs when barrier is tripped

Example:

public class CyclicBarrierExample {
    public static void main(String[] args) {
        int parties = 3;
        int iterations = 3;
        CyclicBarrier barrier = new CyclicBarrier(parties, () -> {
            // This runs when all threads reach the barrier
            System.out.println("All parties have reached the barrier!");
        });
        for (int i = 0; i < parties; i++) {
            final int threadId = i;
            new Thread(() -> {
                try {
                    for (int j = 0; j < iterations; j++) {
                        System.out.println("Thread " + threadId + " preparing for iteration " + j);
                        Thread.sleep((long) (Math.random() * 1000));
                        System.out.println("Thread " + threadId + " waiting at barrier");
                        barrier.await(); // Wait for all parties
                        System.out.println("Thread " + threadId + " crossed the barrier");
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}

Use cases:

  • Parallel computations where threads need to synchronize at certain points
  • Simulations where multiple entities need to be ready before proceeding
  • Multi-phase computations

3. Semaphore: A synchronization aid that controls access to a shared resource through the use of permits.

Key characteristics:

  • Initialized with a number of permits
  • acquire() obtains a permit, blocking if none available
  • release() returns a permit to the semaphore
  • Can be fair or unfair (default is unfair)

Example:

public class SemaphoreExample {
    public static void main(String[] args) {
        // Simulate a pool of 3 connections
        int maxConnections = 3;
        Semaphore semaphore = new Semaphore(maxConnections, true); // Fair semaphore
        for (int i = 0; i < 10; i++) {
            final int userId = i;
            new Thread(() -> {
                try {
                    System.out.println("User " + userId + " is waiting for a connection");
                    semaphore.acquire(); // Get permit
                    System.out.println("User " + userId + " acquired a connection");
                    // Simulate using the connection
                    Thread.sleep((long) (Math.random() * 2000));
                    System.out.println("User " + userId + " releasing connection");
                    semaphore.release(); // Release permit
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}

Use cases:

  • Limiting concurrent access to a resource
  • Implementing bounded collections
  • Implementing producer-consumer with bounded buffer
  • Rate limiting

Other synchronizers:

  • Phaser: More flexible than CyclicBarrier and CountDownLatch, with dynamic party registration
  • Exchanger: Allows two threads to exchange objects at a synchronization point
  • StampedLock (Java 8+): A capability-based lock with optimistic reading
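
As a brief sketch of Phaser’s dynamic registration (the task count and phases below are illustrative):

import java.util.concurrent.Phaser;
Phaser phaser = new Phaser(1); // Register the main thread as one party
for (int i = 0; i < 3; i++) {
    final int taskId = i;
    phaser.register(); // Dynamically register a party for each worker
    new Thread(() -> {
        System.out.println("Task " + taskId + " in phase " + phaser.getPhase());
        phaser.arriveAndAwaitAdvance(); // Wait for all registered parties
        System.out.println("Task " + taskId + " in phase " + phaser.getPhase());
        phaser.arriveAndDeregister();   // Leave the phaser when done
    }).start();
}
phaser.arriveAndAwaitAdvance(); // Main thread participates in phase 0
phaser.arriveAndDeregister();   // Main thread leaves; workers finish phase 1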

21. What is the Executor framework in Java? What are its advantages over directly using threads?

Answer: The Executor framework is a high-level API for launching and managing threads, introduced in Java 5. It separates thread creation and management from the rest of the application logic.

Key interfaces:

  • Executor: Simple interface with a single execute(Runnable) method
  • ExecutorService: Extended interface with lifecycle and task submission methods
  • ScheduledExecutorService: Adds scheduling capabilities

Advantages over direct thread usage:

  1. Thread pooling: Reuses threads to reduce the overhead of thread creation
  2. Task queuing: Manages tasks when all threads are busy
  3. Lifecycle management: Provides methods to shut down gracefully
  4. Task submission: Supports both Runnable and Callable tasks
  5. Future results: Returns Future objects for tracking task completion
  6. Scheduling: Supports delayed and periodic task execution
  7. Thread factory: Allows customization of thread creation
  8. Rejection policies: Controls behavior when the executor is saturated

Example:

// Simple thread creation
for (int i = 0; i < 100; i++) {
Thread thread = new Thread(() -> {
// Task logic
});
thread.start();
}
// Using Executor framework
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 100; i++) {
executor.submit(() -> {
// Task logic
});
}
executor.shutdown();
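
The framework also accepts Callable tasks whose results come back through a Future, and ScheduledExecutorService covers delayed or periodic execution; a small sketch (the task bodies are illustrative):

ExecutorService pool = Executors.newFixedThreadPool(2);
// A Callable returns a value; submit() hands back a Future
Future<Integer> answer = pool.submit(() -> 21 * 2);
try {
    System.out.println("Result: " + answer.get()); // Blocks until the task finishes
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}
pool.shutdown();
// ScheduledExecutorService runs tasks after a delay or periodically
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.scheduleAtFixedRate(
        () -> System.out.println("Heartbeat"), 1, 5, TimeUnit.SECONDS);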

Best practices:

  • Always shut down executors explicitly
  • Use appropriate pool sizes based on workload characteristics
  • Consider using different executors for different types of tasks
  • Handle rejected executions appropriately

22. What are the different types of thread pools in Java? When would you use each?

Answer: Java provides several pre-configured thread pool implementations through factory methods in the Executors class.

1. Fixed Thread Pool:

ExecutorService fixedPool = Executors.newFixedThreadPool(nThreads);
  • Creates a pool with a fixed number of threads
  • If all threads are busy, new tasks wait in an unbounded queue
  • Good for limiting resource usage and when you know the optimal thread count

2. Cached Thread Pool:

ExecutorService cachedPool = Executors.newCachedThreadPool();
  • Creates new threads as needed, reuses idle threads
  • Threads that remain idle for 60 seconds are terminated
  • Good for many short-lived tasks and when demand varies

3. Single Thread Executor:

ExecutorService singlePool = Executors.newSingleThreadExecutor();
  • Uses a single worker thread with an unbounded queue
  • Guarantees sequential execution of tasks
  • Good for tasks that must run sequentially

4. Scheduled Thread Pool:

ScheduledExecutorService scheduledPool = Executors.newScheduledThreadPool(corePoolSize);
  • Fixed-size pool that supports delayed and periodic task execution
  • Good for recurring tasks or tasks that need to start after a delay

5. Work-Stealing Pool (Java 8+):

ExecutorService workStealingPool = Executors.newWorkStealingPool();
  • Uses a ForkJoinPool with parallelism level equal to available processors
  • Employs work-stealing algorithm where idle threads steal tasks from busy threads
  • Good for computational tasks that can be broken down recursively

6. Custom ThreadPoolExecutor:

ThreadPoolExecutor customPool = new ThreadPoolExecutor(
corePoolSize,
maximumPoolSize,
keepAliveTime,
unit,
workQueue,
threadFactory,
rejectedExecutionHandler
);
  • Fully customizable thread pool
  • Good when you need precise control over pool behavior

When to use each:

  • Fixed: When you want to limit resource usage and have a stable number of threads
  • Cached: When you have many short-lived tasks and variable load
  • Single: When tasks must execute sequentially
  • Scheduled: When you need to run tasks on a schedule
  • Work-Stealing: For compute-intensive tasks that can be broken down
  • Custom: When you need fine-grained control over pool parameters

Warning about unbounded queues: Both newFixedThreadPool and newSingleThreadExecutor use unbounded queues which can lead to OutOfMemoryError if tasks are submitted faster than they can be processed.
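
A common mitigation is a custom ThreadPoolExecutor with a bounded queue and an explicit rejection policy; a sketch with illustrative sizes:

ThreadPoolExecutor boundedPool = new ThreadPoolExecutor(
        4,                                          // core pool size
        8,                                          // maximum pool size
        60, TimeUnit.SECONDS,                       // keep-alive for threads above the core size
        new ArrayBlockingQueue<>(100),              // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy()); // on saturation, run the task in the caller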

23. How do you properly shut down an ExecutorService?

Answer: Properly shutting down an ExecutorService involves rejecting new tasks, allowing already-submitted tasks to complete, and potentially interrupting tasks that are taking too long.

Basic shutdown:

executor.shutdown(); // Rejects new tasks, allows existing tasks to finish

Complete shutdown pattern:

ExecutorService executor = Executors.newFixedThreadPool(10);
try {
    // Submit tasks
    executor.submit(() -> { /* task */ });
} finally {
    // Initiate orderly shutdown
    executor.shutdown();
    try {
        // Wait for existing tasks to terminate
        if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
            // Cancel currently executing tasks forcefully
            executor.shutdownNow();
            // Wait for tasks to respond to being cancelled
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                System.err.println("Executor did not terminate");
            }
        }
    } catch (InterruptedException e) {
        // (Re-)Cancel if current thread also interrupted
        executor.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}

Key methods:

  • shutdown(): Initiates orderly shutdown, rejects new tasks but executes already submitted tasks
  • shutdownNow(): Attempts to stop all actively executing tasks and returns a list of tasks that were awaiting execution
  • awaitTermination(long timeout, TimeUnit unit): Blocks until all tasks have completed, the timeout occurs, or the current thread is interrupted
  • isShutdown(): Returns true if shutdown has been initiated
  • isTerminated(): Returns true if all tasks have completed following shutdown

Best practices:

  • Always call shutdown() when you’re done with an executor
  • Use try-finally to ensure shutdown happens even if exceptions occur
  • Consider using a timeout with awaitTermination() to avoid hanging indefinitely
  • Preserve the interrupt status if awaitTermination() is interrupted
  • In web applications or long-running services, register a shutdown hook to ensure proper executor shutdown
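
A minimal sketch of such a shutdown hook (it assumes an executor variable defined elsewhere):

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    executor.shutdown(); // Stop accepting new tasks on JVM exit
    try {
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();
        }
    } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();
    }
}));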

24. What is the Fork/Join framework? How is it different from ExecutorService?

Answer: The Fork/Join framework, introduced in Java 7, is a specialized implementation of the ExecutorService designed for tasks that can be broken down into smaller subtasks recursively (divide-and-conquer algorithm).

Key components:

  • ForkJoinPool: Specialized thread pool that uses work-stealing algorithm
  • ForkJoinTask: Abstract base class for tasks that run in a ForkJoinPool
  • RecursiveTask: ForkJoinTask that returns a result
  • RecursiveAction: ForkJoinTask that doesn’t return a result

How it works:

  1. A task is split into smaller subtasks (fork)
  2. Each subtask is executed in parallel
  3. Results of subtasks are combined (join)
  4. Idle worker threads steal tasks from busy threads’ queues

Example: Computing Fibonacci numbers:

public class FibonacciTask extends RecursiveTask<Integer> {
    private final int n;
    private static final int THRESHOLD = 10;
    public FibonacciTask(int n) {
        this.n = n;
    }
    @Override
    protected Integer compute() {
        if (n <= THRESHOLD) {
            // Base case: compute directly
            return computeDirectly();
        }
        // Split into subtasks
        FibonacciTask f1 = new FibonacciTask(n - 1);
        f1.fork(); // Submit subtask
        FibonacciTask f2 = new FibonacciTask(n - 2);
        int result = f2.compute() + f1.join(); // Compute f2 and wait for f1
        return result;
    }
    private int computeDirectly() {
        if (n <= 1) return n;
        int a = 0, b = 1;
        for (int i = 2; i <= n; i++) {
            int c = a + b;
            a = b;
            b = c;
        }
        return b;
    }
}
// Usage
ForkJoinPool pool = new ForkJoinPool();
int result = pool.invoke(new FibonacciTask(30));

Differences from standard ExecutorService:

Feature | Standard ExecutorService | Fork/Join Framework
--- | --- | ---
Task type | Independent tasks | Recursive, divisible tasks
Work distribution | Tasks submitted externally | Tasks create subtasks
Load balancing | Fixed assignment | Work stealing
Thread management | Typically fixed number | Automatically adapts
Task coordination | Via Future.get() | Via ForkJoinTask.join()
Blocking behavior | Blocks calling thread | Can use ManagedBlocker

When to use Fork/Join:

  • Compute-intensive tasks (not I/O-bound tasks)
  • Tasks that can be divided into smaller subtasks
  • When you want to maximize CPU utilization
  • Examples: sorting large arrays, matrix multiplication, image processing

Best practices:

  • Make the base case threshold large enough to amortize the overhead
  • Avoid blocking operations in tasks
  • Use invokeAll() for multiple subtasks
  • Consider using Java 8+ parallel streams which use Fork/Join internally
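
For comparison, a parallel stream pushes the same divide-and-conquer work onto the common ForkJoinPool; a brief sketch:

import java.util.stream.LongStream;
// Sums 1..10_000_000 in parallel on the common ForkJoinPool
long sum = LongStream.rangeClosed(1, 10_000_000L).parallel().sum();
System.out.println(sum);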

CompletableFuture and Asynchronous Programming

25. What is the difference between Future and CompletableFuture?

Answer: Future (introduced in Java 5) represents the result of an asynchronous computation, while CompletableFuture (introduced in Java 8) extends Future with a rich set of methods for composing, combining, and handling asynchronous operations.

Key differences:

Feature | Future | CompletableFuture
--- | --- | ---
Completion notification | Must poll with isDone() or block with get() | Can register callbacks with thenApply(), thenAccept(), etc.
Composition | Not supported | Supports chaining with thenCompose()
Combination | Not supported | Can combine multiple futures with thenCombine(), allOf(), etc.
Exception handling | Must catch exceptions from get() | Provides exceptionally(), handle(), etc.
Manual completion | Not supported | Can complete manually with complete(), completeExceptionally()
Cancellation | Basic cancel() method | Enhanced with completeExceptionally() and cancel()

Example: Future vs CompletableFuture

// Using Future
ExecutorService executor = Executors.newFixedThreadPool(1);
Future<String> future = executor.submit(() -> {
Thread.sleep(1000);
return "Result";
});
// Blocking call
try {
String result = future.get(); // Blocks until completed
System.out.println(result);
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
executor.shutdown();
// Using CompletableFuture
CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(1000);
return "Result";
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Error";
}
});
// Non-blocking with callbacks
cf.thenAccept(result -> System.out.println(result));
// Or transform the result
CompletableFuture<Integer> lengthFuture = cf.thenApply(String::length);
lengthFuture.thenAccept(length -> System.out.println("Length: " + length));

Benefits of CompletableFuture:

  • Non-blocking programming model
  • Rich API for composition and combination
  • Better exception handling
  • Can be completed manually
  • Works well with functional programming style

26. How do you handle exceptions in CompletableFuture?

Answer: CompletableFuture provides several methods for handling exceptions in asynchronous computations.

1. Using exceptionally():

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
if (Math.random() < 0.5) {
throw new RuntimeException("Error occurred");
}
return "Success";
}).exceptionally(ex -> {
System.err.println("Exception: " + ex.getMessage());
return "Default value after error"; // Recovery value
});

2. Using handle():

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
if (Math.random() < 0.5) {
throw new RuntimeException("Error occurred");
}
return "Success";
}).handle((result, ex) -> {
if (ex != null) {
System.err.println("Exception: " + ex.getMessage());
return "Default value after error";
}
return result;
});

3. Using whenComplete():

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
if (Math.random() < 0.5) {
throw new RuntimeException("Error occurred");
}
return "Success";
}).whenComplete((result, ex) -> {
if (ex != null) {
System.err.println("Exception occurred: " + ex.getMessage());
}
});
// Note: whenComplete doesn't change the result or error

4. Manually completing with exception:

CompletableFuture<String> future = new CompletableFuture<>();
try {
// Some operation
future.complete("Result");
} catch (Exception e) {
future.completeExceptionally(e);
}

Key points:

  • exceptionally(): Handles exceptions and returns a recovery value
  • handle(): Handles both normal result and exception
  • whenComplete(): Performs an action when the future completes (normally or exceptionally)
  • completeExceptionally(): Completes the future with an exception

Best practices:

  • Place exception handlers as close as possible to the source of exceptions
  • Use exceptionally() for simple recovery
  • Use handle() when you need to process both success and failure cases
  • Use whenComplete() for logging or monitoring without changing the result
  • Consider using CompletableFuture.exceptionallyCompose() (Java 12+) for recovery with another asynchronous operation
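
A small sketch of exceptionallyCompose() (Java 12+), where the recovery step is itself asynchronous (the fallback value is illustrative):

CompletableFuture<String> primary = CompletableFuture.supplyAsync(() -> {
    throw new RuntimeException("Primary source failed");
});
// Recover with another asynchronous operation instead of a plain value
CompletableFuture<String> result = primary.exceptionallyCompose(ex ->
        CompletableFuture.supplyAsync(() -> "Value from fallback source"));
result.thenAccept(System.out::println); // Prints the fallback value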

27. How do you combine multiple CompletableFutures?

Answer: CompletableFuture provides several methods for combining multiple asynchronous operations.

1. Combining two futures sequentially (thenCompose):

CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() -> "Hello");
// Use the result of future1 to create future2
CompletableFuture<String> future2 = future1.thenCompose(result ->
CompletableFuture.supplyAsync(() -> result + " World"));
// Output: Hello World
future2.thenAccept(System.out::println);

2. Combining two independent futures (thenCombine):

CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() -> "World");
// Combine results when both complete
CompletableFuture<String> combined = future1.thenCombine(future2,
(result1, result2) -> result1 + " " + result2);
// Output: Hello World
combined.thenAccept(System.out::println);

3. Waiting for all futures to complete (allOf):

List<String> urls = Arrays.asList("url1", "url2", "url3");
// Create a CompletableFuture for each URL
List<CompletableFuture<String>> futures = urls.stream()
.map(url -> CompletableFuture.supplyAsync(() -> fetchUrl(url)))
.collect(Collectors.toList());
// Wait for all to complete
CompletableFuture<Void> allDone = CompletableFuture.allOf(
futures.toArray(new CompletableFuture[0]));
// Process results when all complete
CompletableFuture<List<String>> results = allDone.thenApply(v ->
futures.stream()
.map(CompletableFuture::join) // Safe after allOf
.collect(Collectors.toList()));
results.thenAccept(list -> {
list.forEach(System.out::println);
});
// Helper method
private static String fetchUrl(String url) {
// Simulate HTTP request
try {
Thread.sleep(100 + new Random().nextInt(900));
return "Result from " + url;
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Error";
}
}

4. Completing when any future completes (anyOf):

CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(100);
return "Result from future1";
} catch (InterruptedException e) {
return "Error";
}
});
CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(50);
return "Result from future2";
} catch (InterruptedException e) {
return "Error";
}
});
// Complete when either completes
CompletableFuture<Object> anyResult = CompletableFuture.anyOf(future1, future2);
// Output: Result from future2 (completes faster)
anyResult.thenAccept(System.out::println);

5. Running multiple futures in parallel (runAfterBoth):

CompletableFuture<Void> future1 = CompletableFuture.runAsync(() -> {
// Task 1
});
CompletableFuture<Void> future2 = CompletableFuture.runAsync(() -> {
// Task 2
});
// Run after both complete
CompletableFuture<Void> afterBoth = future1.runAfterBoth(future2, () -> {
System.out.println("Both tasks completed");
});

Key methods:

  • thenCompose(): Sequential composition (flatMap equivalent)
  • thenCombine(): Parallel composition of two futures
  • allOf(): Waits for all futures to complete
  • anyOf(): Completes when any future completes
  • runAfterBoth(): Runs an action after two futures complete
  • applyToEither(): Applies a function to the result of whichever future completes first

28. How do you implement a timeout with CompletableFuture?

Answer: There are several ways to implement timeouts with CompletableFuture.

1. Using orTimeout() (Java 9+):

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(2000); // Long-running task
return "Result";
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Interrupted";
}
}).orTimeout(1, TimeUnit.SECONDS); // Completes exceptionally after timeout
future.whenComplete((result, ex) -> {
if (ex != null) {
System.out.println("Timed out: " + ex.getMessage());
} else {
System.out.println(result);
}
});

2. Using completeOnTimeout() (Java 9+):

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(2000); // Long-running task
return "Result";
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Interrupted";
}
}).completeOnTimeout("Default after timeout", 1, TimeUnit.SECONDS);
future.thenAccept(System.out::println); // Prints "Default after timeout"

3. Using a separate timeout future (pre-Java 9):

<T> CompletableFuture<T> withTimeout(CompletableFuture<T> future, long timeout, TimeUnit unit) {
CompletableFuture<T> timeoutFuture = new CompletableFuture<>();
// Schedule a task to complete the timeoutFuture exceptionally after the timeout
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.schedule(() -> {
timeoutFuture.completeExceptionally(
new TimeoutException("Timeout after " + timeout + " " + unit));
scheduler.shutdown();
}, timeout, unit);
// Return the future that completes first
return future.applyToEither(timeoutFuture, Function.identity());
}
// Usage
CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(2000);
return "Result";
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Interrupted";
}
});
CompletableFuture<String> withTimeout = withTimeout(future, 1, TimeUnit.SECONDS);
withTimeout.whenComplete((result, ex) -> {
if (ex != null) {
System.out.println("Error: " + ex.getMessage());
} else {
System.out.println(result);
}
});

4. Using a timeout future with anyOf (pre-Java 9):

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(2000);
return "Result";
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return "Interrupted";
}
});
CompletableFuture<String> timeout = new CompletableFuture<>();
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.schedule(() -> {
timeout.complete("Timeout");
scheduler.shutdown();
}, 1, TimeUnit.SECONDS);
CompletableFuture.anyOf(future, timeout).thenAccept(result -> {
if ("Timeout".equals(result)) {
System.out.println("Operation timed out");
future.cancel(true); // Cancel the original future
} else {
System.out.println("Result: " + result);
}
});

Best practices:

  • Use orTimeout() or completeOnTimeout() in Java 9+ for simplicity
  • Always clean up resources (like scheduled executors) when using custom timeout implementations
  • Consider cancelling the original future when a timeout occurs
  • Be careful with thread interruption when cancelling futures
  • Choose appropriate timeout values based on the expected operation duration