CodeRevisited Keep Learning.. Cheers!

ReadWriteLock in Java

The following post demonstrates ReadWriteLock semantics in Java.


The readers-writers problem is one of the classic synchronization problems in computer science.

Suppose there is a shared resource and many threads, some of which read data from the shared resource and some of which write data to it. Our aim is to design a solution with the following constraints:

  • No thread may access the shared resource for reading or writing while another thread is in the act of writing to it
  • No reader shall be kept waiting if the shared resource is currently opened for reading

An additional constraint, based on writer's preference:

No writer, once added to the queue, shall be kept waiting longer than absolutely necessary

How to implement

One of the traditional ways to implement this is to protect the shared resource with built-in synchronization. All access to the shared resource requires an appropriate lock to be acquired. With built-in synchronization, only the one thread that holds the lock has access to the shared resource.

But if one reader has acquired the lock and is accessing the shared resource, it is not possible for another reader to acquire the lock and access the resource at the same time. Hence, traditional synchronized blocks/methods are not an option here.


Java 5.0 introduced ReadWriteLock, which solves this problem. A ReadWriteLock maintains a pair of locks, one associated with reading and the other with writing. Read the Javadoc here.

The following code demonstrates how to use ReadWriteLock to build a thread-safe map with functionality similar to ConcurrentHashMap.
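The original code listing isn't shown here, so the following is a minimal sketch of the idea: a map wrapper whose get() takes the shared read lock (many readers at once) and whose put() takes the exclusive write lock. The class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A minimal thread-safe map: many readers may hold the read lock
// concurrently, while the write lock is exclusive.
public class ReadWriteMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock();      // shared: multiple readers at once
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public V put(K key, V value) {
        lock.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            return map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The try/finally blocks guarantee the lock is released even if the wrapped map throws.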

Code for testing

Thread Pool in Java

This post gives a high-level overview of thread pools in Java and how they can be created. It also shows a sample implementation (not production quality) to demonstrate a thread pool in Java.

What is Thread Pool?

It’s a pool of worker threads, each with a life cycle as follows:

  1. Get a new task to execute
  2. Execute it
  3. Go back to waiting for the next task

Why Thread Pools?

In many server applications, we may want to process each client request in parallel. For that, we can choose the traditional approach of creating one thread per request.

Disadvantages of the one-thread-per-task approach

  • The overhead of creating a new thread for each request is significant. A server that processes requests can spend more time and consume more system resources creating and destroying threads than it would processing actual client requests.
  • Creating too many threads in one JVM can cause the system to run out of memory or thrash due to excessive memory consumption.

For example, the following sample code has a producer task that puts an integer on a queue and a consumer task that takes one integer from the queue. When run in a loop, the code creates a new thread each time to perform a task. It creates 600 threads for short-lived tasks.
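The original listing isn't shown, so here is a sketch of what such code looks like. The loop count of 300 is an assumption chosen to match the 600 threads mentioned above (one producer plus one consumer per iteration).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One-thread-per-task sketch: every iteration spawns a fresh producer
// and a fresh consumer thread, so 300 iterations create 600 short-lived threads.
public class ProducerConsumerNoPool {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 300; i++) {
            final int value = i;
            Thread producer = new Thread(() -> queue.offer(value));
            Thread consumer = new Thread(() -> {
                try {
                    queue.take(); // blocks until the producer has offered
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }
}
```

Each thread lives only long enough to move one integer, which is exactly the overhead the bullet points above describe.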

Number of threads created when I ran the above program on my machine:

Num of threads without pool

To prevent resource thrashing, server applications need some means of limiting how many requests are being processed at any given time. A thread pool offers a solution to both the problem of thread life-cycle overhead and the problem of resource thrashing.

How to Create a Thread pool

  • Create n threads; call them workers.
  • For each worker, implement the run method with two steps: 1. Wait for a task on a queue 2. Execute the task and go back to the waiting state.
  • Expose an addTask method that adds a task to that task queue.

Code for sample Thread pool in java
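The original listing isn't shown, so the following is a minimal sketch (not production quality, as noted above) that follows the three steps: n workers block on a shared queue, run each task they take, and go back to waiting. The class name and details are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal fixed-size thread pool: n worker threads block on a shared
// task queue and execute tasks as they arrive.
public class SimpleThreadPool {
    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();

    public SimpleThreadPool(int n) {
        for (int i = 0; i < n; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        // wait for a task, execute it, go back to waiting
                        taskQueue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // worker shuts down
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public void addTask(Runnable task) {
        taskQueue.offer(task);
    }
}
```

A real pool would also handle task exceptions and provide an orderly shutdown; this sketch omits both for brevity.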

Modified code for producerConsumer to use Thread pool

In the following code, instead of creating a new thread each time, we submit a new task to the created thread pool.
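Since the original listing isn't shown, here is a sketch of the pool-based variant. It uses the JDK's Executors.newFixedThreadPool as a stand-in for the custom pool; the pool size of 4 and the loop count of 300 are assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Pool-based variant: the same 600 producer/consumer tasks are handled
// by a small, fixed set of worker threads instead of 600 fresh threads.
public class ProducerConsumerWithPool {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 300; i++) {
            final int value = i;
            pool.submit(() -> queue.offer(value));    // producer task
            pool.submit(() -> {                       // consumer task
                try {
                    queue.take();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

The worker threads are reused across all 600 tasks, which is why the thread count in the profiler drops so sharply.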

Number of threads created when I ran the above program on my machine:

Num of threads with pool


The above thread pool implementation is just for illustrative purposes. Please look at the java.util.concurrent package of JDK 1.5 and above for more sophisticated thread pool implementations.

Atomic variables

This post will demonstrate how to write wait-free, lock-free code using the atomic variable classes that were added in JDK 1.5.

The traditional way to coordinate access to shared fields in the Java language is to use synchronization, ensuring that all access to shared fields is done holding the appropriate lock. With synchronization, you are assured that whichever thread holds the lock that protects a given set of variables will have exclusive access to those variables, and changes to those variables will become visible to other threads when they subsequently acquire the lock.

Before JDK 1.5, if we wanted to build a thread-safe counter with get(), increment(), and decrement() methods, each method needed to be synchronized to ensure that no updates are lost and that all threads see the most recent value of the counter.

Example code
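The original listing isn't shown, so here is a minimal sketch of the counter described above, with every method synchronized on the instance.

```java
// Thread-safe counter using built-in synchronization: every method
// acquires the object's intrinsic lock, so no updates are lost and
// all threads see the most recent value.
public class SynchronizedCounter {
    private int value;

    public synchronized int get() {
        return value;
    }

    public synchronized int increment() {
        return ++value;
    }

    public synchronized int decrement() {
        return --value;
    }
}
```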

But what’s wrong with the traditional approach?

To execute any method, each thread has to acquire lock on object first. If the lock is heavily contended (threads frequently ask to acquire the lock when it is already held by another thread), throughput can suffer, as contended synchronization can be quite expensive.

Another problem with lock-based algorithms is that if a thread holding a lock is delayed (due to a page fault, scheduling delay, or other unexpected delay), then no thread requiring that lock may make progress.

Atomic Variables

In JDK 5.0, a set of toolkit classes (java.util.concurrent.atomic) was introduced to support wait-free, lock-free programming. The atomic variable classes can be thought of as a generalization of volatile variables, extending the concept of volatile variables to support atomic conditional compare-and-set updates. Reads and writes of atomic variables have the same memory semantics as reads and writes of volatile variables. Operations on atomic variables get turned into the hardware primitives that the platform provides for concurrent access, such as compare-and-set.

For more information, read the package description here.

Example code using AtomicInteger
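The original listing isn't shown; here is a minimal sketch of the same counter rewritten with AtomicInteger, so no method ever blocks on a lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free counter: AtomicInteger updates compile down to hardware
// compare-and-set primitives instead of acquiring a lock.
public class AtomicCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int get() {
        return value.get();
    }

    public int increment() {
        return value.incrementAndGet();
    }

    public int decrement() {
        return value.decrementAndGet();
    }
}
```

Under heavy contention a thread may retry its compare-and-set, but no thread is ever suspended waiting for a lock holder.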

Memory leaks in Java

A memory leak occurs when memory acquired by a program for execution is never freed-up to be used by other programs and applications.

Many Java programmers believe that they don’t have to worry about allocating and freeing up memory. You simply create objects, and Java takes care of removing them when they are no longer needed by the application, through a mechanism known as garbage collection. With that, programmers assume that memory leaks are taken care of by the Java programming language. But that’s not entirely the case.

What is garbage collector’s role?

The job of the garbage collector is to find objects that are no longer accessed or referenced by the application and to remove them. The garbage collector starts at the root nodes, classes that persist throughout the life of a Java application, and sweeps through all of the nodes that are referenced. As it traverses the nodes, it keeps track of which objects are actively being referenced. Any objects that are no longer being referenced are then eligible to be garbage collected. The memory resources used by these objects are returned to the Java virtual machine (JVM) when the objects are deleted.

So it is evident that unused objects are automatically garbage collected. However, the key point to remember is that an object is only counted as unused when it is no longer referenced. If an object reference is unintentionally retained, not only is that object excluded from garbage collection, but so too are any objects referenced by that object, and so on. Even if only a few object references are unintentionally retained, many, many objects may be prevented from being garbage collected, with potentially large effects on performance.

What causes memory leaks in Java?

Obsolete object references

An obsolete reference is simply a reference that will never be dereferenced again. As mentioned above, if references to unused objects are unintentionally retained, the garbage collector fails to recognize those objects as unused.

Following is the example code that causes memory leaks because of obsolete object references.
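The original listing isn't shown, so here is a reconstruction based on the corrected pop method that appears later in this post (the field names array, N, and the isEmpty() method are taken from that snippet; the rest is an assumption).

```java
// A stack that leaks memory: pop() moves the index down but never clears
// the slot, so the backing array keeps a live (obsolete) reference to
// every element that has ever been popped.
public class Stack<E> {
    private final E[] array;
    private int N; // number of elements currently on the stack

    @SuppressWarnings("unchecked")
    public Stack(int capacity) {
        array = (E[]) new Object[capacity];
    }

    public boolean isEmpty() {
        return N == 0;
    }

    public void push(E item) {
        array[N++] = item;
    }

    public E pop() {
        if (isEmpty())
            throw new RuntimeException("Stack underflow");
        return array[--N]; // leak: array[N] still references the popped item
    }
}
```

Because the popped slots are never nulled, the garbage collector still sees every popped Integer as reachable through the array.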

The test class pushes 10000 integers and pops them. At the end of these operations, we expect all the Integer objects to be destroyed. But when I checked the heap profile with VisualVM, all 10000 Integer objects were still there in the Java heap.

Heap dump from VisualVM

An easy way to fix this sort of problem is to null out references once they become obsolete. In the case of our Stack class, the reference to an item becomes obsolete as soon as it’s popped off the stack. The corrected version of the pop method looks like this:

public E pop() {
    if (isEmpty())
        throw new RuntimeException("Stack underflow");
    E item = array[--N];
    array[N] = null; // eliminate the obsolete reference
    return item;
}
Heap dump from VisualVM

Good practices:

Programmers should not be obsessed with nulling out object references. Nulling out object references should be the exception rather than the norm. The best way to eliminate an obsolete reference is to let the variable that contained the reference fall out of scope. This occurs naturally if you define each variable in the narrowest possible scope.

When should we null out a reference?

In the case of the stack, it maintains its own memory for storing elements. When array elements fall out of scope, we just need to let the garbage collector know this fact by nulling out those references. Whenever a class manages its own memory, the programmer should be alert for memory leaks. Whenever an element is freed, any object references contained in the element should be nulled out.

There are other sources of memory leaks. I will talk about them in the next post.


What is singleton?

A singleton is simply a class that is instantiated exactly once.

Objectives of a singleton?

There should be only one instance allowed for a class.

There should be a global point of access to that single instance.

Why are singletons required?

Singletons typically represent a system component that is intrinsically unique, such as the Java runtime, the window manager, or the file system. Singletons are also used to store data that is used and updated across multiple components. The data in one component is usually important to another component, so everything is managed in one central object.

Ways to implement

There are two ways to implement singletons. In both, we suppress the constructor so that no outside class can create an instance, and we declare a static member to hold the sole instance of our singleton class.

Early Initialization

At class loading time, the instance gets created. The JVM guarantees that the instance will be created before any thread accesses the static member. The getInstance() method simply returns the instance that was already created. If your application always creates and uses the singleton instance, or if the overhead of creating it is not onerous, this early initialization is preferred.
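The original listing isn't shown; here is a minimal sketch of the eager variant (class and field names are illustrative).

```java
// Early (eager) initialization: the sole instance is created when the
// class is loaded; the JVM guarantees this happens before any thread
// can read INSTANCE.
public class EagerSingleton {
    private static final EagerSingleton INSTANCE = new EagerSingleton();

    private EagerSingleton() { } // suppress outside instantiation

    public static EagerSingleton getInstance() {
        return INSTANCE;
    }
}
```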

Lazy Initialization

The sole instance gets created inside a static factory method. The getInstance() method gives us a way to instantiate the class and also to return an instance of it. In this scenario, we need special attention when multiple threads access the getInstance() method.
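Here is a minimal sketch of the naive lazy variant (the original listing isn't shown). Note that this version is not thread-safe: two threads can race through the null check and create two instances, which is exactly why the following sections add synchronization.

```java
// Lazy initialization: the instance is created on the first call to
// getInstance(). NOT thread-safe as written.
public class LazySingleton {
    private static LazySingleton instance;

    private LazySingleton() { } // suppress outside instantiation

    public static LazySingleton getInstance() {
        if (instance == null) {            // two threads may both see null
            instance = new LazySingleton();
        }
        return instance;
    }
}
```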

Synchronizing getInstance() Method

This works perfectly fine in a multithreaded scenario, but there is a performance trade-off. If you analyze it carefully, you realize that synchronization is only needed during initialization of the singleton instance, to prevent creating another instance. All other invocations determine that the instance is non-null and return it. Multiple threads can safely execute concurrently on all invocations except the first. However, because the method is synchronized, you pay the cost of synchronization for every invocation, even though it is only required on the first.
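A minimal sketch of the synchronized variant (the original listing isn't shown; names are illustrative):

```java
// Thread-safe lazy initialization: synchronizing getInstance() makes the
// check-then-create step atomic, but every call pays the lock cost.
public class SynchronizedLazySingleton {
    private static SynchronizedLazySingleton instance;

    private SynchronizedLazySingleton() { }

    public static synchronized SynchronizedLazySingleton getInstance() {
        if (instance == null) {
            instance = new SynchronizedLazySingleton();
        }
        return instance;
    }
}
```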

Double-checked locking

With double-checked locking, we first check whether the instance is created. Only if it is not do we synchronize and create the object instance.
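A minimal sketch of double-checked locking (the original listing isn't shown). The volatile modifier on the field is essential on Java 5 and later, so that other threads can never observe a partially constructed instance.

```java
// Double-checked locking: synchronize only on the slow path, when the
// instance has not yet been created.
public class DclSingleton {
    private static volatile DclSingleton instance;

    private DclSingleton() { }

    public static DclSingleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (DclSingleton.class) {
                if (instance == null) {          // second check, under lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```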

Disadvantage of double checking

This solution will not work in Java 1.4 or earlier. Many JVMs for Java 1.4 and earlier contain implementations of the volatile keyword that allow improper synchronization of double-checked locking.

Another version of Thread-safe Lazy Initialization
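The version the heading refers to isn't shown; it is most likely the initialization-on-demand holder idiom (an assumption), sketched below. The nested class is not loaded until getInstance() is first called, and the JVM guarantees class initialization is thread-safe, so no explicit locking is needed.

```java
// Initialization-on-demand holder idiom: lazy, thread-safe, and lock-free,
// relying entirely on the JVM's class-loading guarantees.
public class HolderSingleton {
    private HolderSingleton() { }

    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE; // triggers Holder's initialization once
    }
}
```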

Special mentions for the above implementations


To make a singleton class serializable, it is not sufficient merely to add implements Serializable to its declaration. To maintain the singleton guarantee, you have to declare all instance fields transient and provide a readResolve method. Otherwise, each time a serialized instance is deserialized, a new instance will be created. readResolve works only when all fields are transient.


The preferred way is not to implement the Cloneable interface, since we never need another copy of a singleton object.

Enum Singleton

Enum singletons are a relatively new concept, in practice from Java 5 onwards, after the introduction of the enum keyword.


  • Easy to write compared to the lazy initialization approach
  • Enum singletons handle serialization by default
  • Creation of the enum instance is thread-safe

A single-element enum type is the best way to implement a singleton.
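A minimal sketch of an enum singleton (the original listing isn't shown; the name and method are illustrative):

```java
// Enum singleton: the JVM guarantees exactly one INSTANCE, serialization
// is handled for free, and instance creation is thread-safe.
public enum EnumSingleton {
    INSTANCE;

    private int counter;

    public int nextValue() {
        return ++counter; // example of state held by the sole instance
    }
}
```

Usage is simply EnumSingleton.INSTANCE.nextValue(); there is no getInstance() to write, and reflection cannot create a second instance.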