Thursday, March 14, 2013

Problem with Java Concurrency - Java 4 and above

Java's present concurrency model (Java 4) suffers from various weaknesses, such as broken encapsulation and the inheritance anomaly.
Design criteria for a new concurrency model can be defined as: it should comply with the underlying object-oriented model, be expressive, offer fault restriction, and be simple. Assessing the new model is important to evaluate how well it meets these design criteria. Moreover, it should be investigated whether the new concurrency model is stronger than previous versions of Java and offers better support for modelling concurrent problems and introducing parallelism. Hence, an assessment method that assigns a numerical value to a concurrency model is required.


Some Problem Examples:

http://stackoverflow.com/questions/461896/what-is-the-most-frequent-concurrency-issue-youve-encountered-in-java

------------------------------- Example - 1 ---------------------------------------------------------------------

Changes made by Thread_1 to a variable shared with another thread, Thread_2, are not guaranteed to be visible to Thread_2.

class MyThread extends Thread {
  private boolean stop = false;

  public void run() {
    while(!stop) {
      doSomeWork();
    }
  }

  public void setStop() {
    this.stop = true;
  }
}
 
@ : As long as stop is not volatile or setStop is not synchronized,
this is not guaranteed to work. This mistake is especially devilish because
in 99.999% of cases it won't matter in practice, as the reader thread will
eventually see the change - but we don't know how soon it will see it.
 
@ : that's because of the Java memory model. You should read about it if you want to know it in detail (Java Concurrency in Practice by Brian Goetz explains it well, for example). In short: unless you use memory synchronization keywords/constructs (like volatile, synchronized, AtomicXyz, but also when a Thread is finished), one Thread has NO guarantee whatsoever to see the changes made to any field by a different thread.
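A minimal sketch of the fix suggested above (the class name here is illustrative): declaring the flag volatile guarantees that the reader thread sees the write.

```java
// Sketch: a stoppable worker whose stop flag is volatile,
// so the write in setStop() is guaranteed to become visible to run().
class StoppableThread extends Thread {
    private volatile boolean stop = false;

    public void run() {
        while (!stop) {
            // doSomeWork() would go here
        }
    }

    public void setStop() {
        this.stop = true;
    }
}
```

Synchronizing both setStop() and a getter on the same lock would give the same visibility guarantee.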


@Nitesh : Some excerpts from Java Concurrency in Practice:
 Sharing Variables without Synchronization. Don't Do This.
 
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}


When the main thread writes first to number and then to ready without synchronization, the reader thread could see those writes in the opposite order - or not at all!

Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of a shared mutable variable, the reader and writer threads must synchronize on a common lock.
------------------------ Example - 2 ------------------------------------------------------------------------------
Until I took a class with Brian Goetz I didn't realize that the non-synchronized getter of a private field mutated through a synchronized setter is never guaranteed to return the updated value. Only when a variable is protected by a synchronized block on both reads AND writes will you get the guarantee of seeing the latest value of the variable.
public class SomeClass {
    private Integer thing = 1;

    public synchronized void setThing(Integer thing) {
        this.thing = thing;
    }

    /**
     * This may return 1 forever and ever no matter what is set,
     * because the read is not synchronized.
     */
    public Integer getThing() {
        return thing;
    }
}
@ : In later JVMs (1.5 and forward, I think), the use of volatile will fix that as well.
@ : Not necessarily. volatile gives you the latest value, so it prevents the returning of 1 forever, but it does not provide locking. It's close, but not quite the same.
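A sketch of both suggestions (the class name is hypothetical): a volatile field guarantees the reader sees the latest write, while an AtomicInteger additionally makes compound read-modify-write updates atomic. Neither provides mutual exclusion across multiple separate operations.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SomeClassFixed {
    // volatile: every read sees the most recent write, so
    // getThing() can no longer return a stale 1 forever
    private volatile Integer thing = 1;

    public void setThing(Integer thing) {
        this.thing = thing;
    }

    public Integer getThing() {
        return thing;
    }

    // AtomicInteger goes further: compound operations like
    // incrementAndGet() are atomic as well as visible
    private final AtomicInteger counter = new AtomicInteger(0);

    public int increment() {
        return counter.incrementAndGet();
    }
}
```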

------------------- Example - 2 ------------------------------------------------------------------------------
 @Nitesh : Synchronization and Inheritance

http://www.coderanch.com/t/232382/threads/java/Synchronization-case-Inheritance

Hi friends,
If class A inherits from B, and B has a synchronized method foobar() which A doesn't override, then will foobar() also be synchronized when called on an instance of A?
Will the new data members declared in A be affected by the synchronization lock of foobar()?
Thanks in advance,
Gopal Shah
 


@: Yes and yes.
Remember that the lock is part of each instance of java.lang.Object. Every instance of any class has exactly one lock. Any synchronized (non-static) method, in any part of the class hierarchy of an instance, is accessing the same one lock.
By declaring foobar() as synchronized, you are saying that a thread must implicitly obtain the lock before entering foobar() and will implicitly release it when leaving foobar().
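A quick sketch of the point (class and member names follow the question, the rest is illustrative): the inherited synchronized foobar() and any synchronized method added in the subclass guard the same per-instance monitor.

```java
class B {
    protected int x = 0;

    // Locks the monitor of the instance it is called on
    public synchronized void foobar() {
        x++;
    }

    public synchronized int getX() {
        return x;
    }
}

class A extends B {
    private int y = 0;

    // Same monitor (this) as the inherited foobar(), so the
    // subclass's data members are guarded by the same lock
    public synchronized void touch() {
        x++;
        y++;
    }

    public synchronized int getY() {
        return y;
    }
}
```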
 

----------------------- Example - 2 ----------------------------------------------------------------------------

Not realising that the this in an inner class is not the this of the outer class. Typically in an anonymous inner class that implements Runnable. The root problem is that because synchronisation is part of all Objects, there is effectively no static type checking. I've seen this at least twice on Usenet, and it also appears in Brian Goetz's Java Concurrency in Practice.
BGGA closures don't suffer from this, as there is no this for the closure (this references the outer class). If you use non-this objects as locks, it gets around this problem and others.
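A sketch of the pitfall and the usual workaround (names here are illustrative): inside an anonymous Runnable, this refers to the Runnable instance rather than the enclosing object, so an explicit lock object removes the ambiguity.

```java
public class Outer {
    // A dedicated lock object: unambiguous from both the outer
    // class and any inner or anonymous class
    private final Object lock = new Object();
    private int count = 0;

    public Runnable task() {
        return new Runnable() {
            public void run() {
                // synchronized (this) here would lock the Runnable,
                // not the Outer instance - the classic mistake
                synchronized (lock) {
                    count++;
                }
            }
        };
    }

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }
}
```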

@Nitesh : Object Escape
--------------------------- Example - 2 -------------------------------------------------------------------------


private static final String LOCK = "LOCK";  // use matching strings
                                            // in two different libraries

public void doSomeStuff() {
    synchronized (LOCK) {
        this.work();
    }
}
At first glance, this looks like a pretty trivial synchronization example. However, because Strings are interned in Java, the literal string "LOCK" turns out to be the same instance of java.lang.String in both libraries (even though the two constants are declared completely independently of each other). The result is obviously bad.

@Jared: "until the string is interned" makes no sense. Strings don't magically "become" interned. String.intern() returns a different object, unless you already have the canonical instance of the specified String. Also, all literal strings and string-valued constant expressions are interned. Always. See the docs for String.intern() and §3.10.5 of the JLS.
--------------------- Example - 3 ------------------------------------------------------------------------

One classic problem is changing the object you're synchronizing on while synchronizing on it:
synchronized(foo) {
  foo = ...
} 
 
------------------- Example - 4 --------------------------------------------
 
Double-Checked Locking. By and large.
The paradigm, which I started learning the problems of when I was working at BEA, is that people will check a singleton in the following way:
public class MySingleton {
  private static MySingleton s_instance;
  public static MySingleton getInstance() {
    if (s_instance == null) {
      synchronized (MySingleton.class) { s_instance = new MySingleton(); }
    }
    return s_instance;
  }
}
This never works, because two threads can both pass the null check, enter the synchronized block one after the other, and each create a new instance. So the natural change is then to make it:
  public static MySingleton getInstance() {
    if(s_instance == null) {
      synchronized(MySingleton.class) {
        if(s_instance == null) s_instance = new MySingleton();
      }
    }
    return s_instance;
  }
That doesn't work either, because the Java Memory Model doesn't support it: another thread can observe a partially constructed s_instance. You need to declare s_instance as volatile to make it work, and even then it only works on Java 5 and later.
People that aren't familiar with the intricacies of the Java Memory Model mess this up all the time.
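A sketch of the working Java 5+ version described above: declaring the field volatile makes double-checked locking correct under the revised memory model (the class name is illustrative).

```java
public class MySingletonFixed {
    // volatile is essential: it prevents other threads from
    // observing a partially constructed instance (Java 5+)
    private static volatile MySingletonFixed s_instance;

    private MySingletonFixed() {
    }

    public static MySingletonFixed getInstance() {
        if (s_instance == null) {                      // first check, no lock
            synchronized (MySingletonFixed.class) {
                if (s_instance == null) {              // second check, under the lock
                    s_instance = new MySingletonFixed();
                }
            }
        }
        return s_instance;
    }
}
```

The initialization-on-demand holder idiom (a static nested class holding the instance) achieves the same laziness without explicit locking.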

------------------- Example - 5 -----------------------------------------------------------------------------
Not properly synchronizing on objects returned by Collections.synchronizedXXX(), especially during iteration or multiple operations:
Map<String, String> map = Collections.synchronizedMap(new HashMap<String, String>());

.....

if(!map.containsKey("foo"))
    map.put("foo", "bar");
That's wrong. It should be:
synchronized(map) {
    if(!map.containsKey("foo"))
        map.put("foo", "bar");
}
Or with a ConcurrentMap implementation:
map.putIfAbsent("foo", "bar");
@Nitesh : Here, though the single operations are synchronized, the combined check-then-put operation is not. So a synchronized map does not solve the problem; an extra lock is needed to perform both the check and the put atomically.
----------------- Example - 6 ------------------------------------------------------------------------------------

@Nitesh :
What if wait() has no matching notify()? The thread would wait forever.
Object.notify() succeeds even if no thread is waiting, so a notification can simply be lost.
This can let buggy code slip through, because some notifications just escape.
--------------- Example - 7 ----------------------------------------------------------------------------------
Unsafe Operation : Similar to Example - 5

 
It can be easy to think synchronized collections grant you more protection than they actually do, and to forget to hold the lock between calls. I have seen this mistake a few times:
 List<String> l = Collections.synchronizedList(new ArrayList<String>());
 String[] s = l.toArray(new String[l.size()]);
For example, in the second line above, the toArray() and size() methods are both thread-safe in their own right, but size() is evaluated separately from toArray(), and the lock on the List is not held between these two calls. If you run this code with another thread concurrently removing items from the list, sooner or later you will end up with a new String[] returned which is larger than required to hold all the elements in the list, with null values in the tail. It is easy to think that because the two method calls to the List occur in a single line of code this is somehow an atomic operation, but it is not.
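A sketch of two possible fixes: hold the list's own lock across the compound size()/toArray() call, or pass a zero-length array so toArray() sizes the result itself inside a single synchronized call.

```java
import java.util.List;

public class SafeSnapshot {
    // Fix 1: hold the synchronized list's lock so size() and
    // toArray() observe the same state
    public static String[] snapshot(List<String> l) {
        synchronized (l) {
            return l.toArray(new String[l.size()]);
        }
    }

    // Fix 2: let toArray() allocate the array itself; the sizing
    // then happens inside one synchronized method call
    public static String[] snapshotSimple(List<String> l) {
        return l.toArray(new String[0]);
    }
}
```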

----------------- Example - 8 -------------------------------------------------------------------------

Starting a thread within the constructor of a class is problematic. If the class is extended, the thread can be started before the subclass's constructor has executed.

-------------- Example - 9 ---------------------------------------------------------------------------

 Same as Example - 2
Synchronizing on a string literal or constant defined by a string literal is (potentially) a problem as the string literal is interned and will be shared by anyone else in the JVM using the same string literal. I know this problem has come up in application servers and other "container" scenarios.
Example:
private static final String SOMETHING = "foo";

synchronized(SOMETHING) {
   //
}
In this case, anyone using the string "foo" to lock on is sharing the same lock.
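The usual fix, sketched here with illustrative names: lock on a private Object that no other code in the JVM can reach, instead of an interned String literal.

```java
public class PrivateLock {
    // new Object() is unique to this class; unlike the interned
    // literal "foo", no other library can accidentally share it
    private static final Object LOCK = new Object();

    private int counter = 0;

    public void doSomeStuff() {
        synchronized (LOCK) {
            counter++;   // stands in for the real work
        }
    }

    public int getCounter() {
        synchronized (LOCK) {
            return counter;
        }
    }
}
```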
-------------------- Example - 10 ------------------------------------------------------------------
@Nitesh: Nice Java puzzle

Unbalanced synchronization, particularly against Maps, seems to be a fairly common problem. Many people believe that synchronizing puts to a Map (not a ConcurrentMap, but say a HashMap) while not synchronizing gets is sufficient. This, however, can lead to an infinite loop during a re-hash.
The same problem (partial synchronization) can occur anywhere you have shared state with reads and writes, however.

--------------------- Example - 11 ----------------------------------------------------------------------

The dumbest mistake I frequently make is forgetting to synchronize before calling notify() or wait() on an object.
Unlike most concurrency problems, isn't this one easy to find? At least you get an IllegalMonitorStateException here.
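A minimal sketch of the correct shape (the class name is illustrative): both wait() and notifyAll() are invoked while holding the monitor, and the condition is rechecked in a loop to guard against spurious wakeups.

```java
public class Gate {
    private boolean open = false;

    // Legal: a synchronized method holds this object's monitor,
    // which wait() requires
    public synchronized void await() throws InterruptedException {
        while (!open) {      // loop: recheck the condition after every wakeup
            wait();
        }
    }

    // notifyAll() likewise must be called with the monitor held
    public synchronized void openGate() {
        open = true;
        notifyAll();
    }
}
```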

--------------------- Example - 12 ----------------------------------------------------------------------- 
 


Use of a global object such as a static variable for locking.
This leads to very bad performance because of contention. 




 ------------------ Example - 13 -------------------------------------------------------------------------

The most recent concurrency-related bug I ran into was an object that created an ExecutorService in its constructor, but never shut the ExecutorService down when the object was no longer referenced. Thus, over a period of weeks, thousands of threads leaked, eventually causing the system to crash. (Technically, it didn't crash, but it did stop functioning properly, while continuing to run.)
Technically, I suppose this isn't a concurrency problem, but it's a problem relating to use of the java.util.concurrent libraries.
@Nitesh : Always Shutdown the Executor
---------------- Example - 14 ------------------------------------------------------------------------------

Dave Ray touched on this in his first answer, and in fact I also encountered a deadlock having to do with invoking methods on listeners from within a synchronized method. I think the more general lesson is that method calls should not be made "into the wild" from within a synchronized block - you have no idea if the call will be long-running, result in deadlock, or whatever.
In this case, and usually in general, the solution was to reduce the scope of the synchronized block to just protect a critical private section of code.
Also, since we were now accessing the Collection of listeners outside of a synchronized block, we changed it to be a copy-on-write Collection. Or we could have simply made a defensive copy of the Collection. The point being, there are usually alternatives to safely access a Collection of unknown objects.
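A sketch of the copy-on-write approach (using Runnable as a stand-in listener type for illustration): fire() iterates over an immutable snapshot, so no lock is held while calling into unknown listener code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class EventSource {
    // Every mutation copies the backing array; iteration works on
    // a snapshot and never throws ConcurrentModificationException
    private final List<Runnable> listeners = new CopyOnWriteArrayList<Runnable>();

    public void addListener(Runnable l) {
        listeners.add(l);
    }

    public void fire() {
        // No lock is held here, so a slow or re-entrant listener
        // cannot deadlock the event source
        for (Runnable l : listeners) {
            l.run();
        }
    }
}
```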



 

Concurrency Speed Up Factor : Amdahl's Law

Amdahl's Law, or Amdahl's argument, defines the maximum improvement of an overall system when parts of the system are improved.

In the parallelisation or concurrency domain, it defines the maximum speedup factor when a task executed on a single processor is divided into tasks that can be executed by multiple processors.

What is Speedup?

The speedup factor is the ratio of the time of execution on a single processor to the time of execution when the task is distributed in parallel across N processors.

Let's assume a task, task_overall, is executed on a single processor. Let the time of execution be T(task_overall).
Now suppose the nature of the task permits it to be divided into N smaller tasks.
Let each smaller task be task(i), where i = 1, 2, ..., N.
In order to improve execution time, we decide to execute each task(i) on a separate processor.

Since the time of execution on a single processor is proportional to the amount of work executed,
let the time of execution for a chunk of work, task(i), be T(i), where task(i) is a 1/N fraction of the complete task.
So the new fractional time on each processor is T(i) = T(task_overall)/N.
Speedup factor = T(task_overall)/T(i) = N
E.g.:
Let T(task_overall) = 10 hours, i.e. it takes 10 hours to do the task on a single processor.
Let's divide this task into 7 chunks and allow it to be processed by 7 processors.
The fraction of the task executed at each processor is now 1/7.
So the time on each processor has dropped from 10 hours to 10 × (1/7) = 10/7 hours.
The ratio of the previous time to this reduced time is the speedup factor.
Speedup factor = T(task_overall) / T(i) = 10 / (10/7) = 7
So speed increases seven-fold by dividing the task in parallel across 7 processors.

Speedup factor when only a part of the task can be parallelised

Let the task task_overall contain a fraction that cannot be parallelised, i.e. this fraction can only be executed serially, so no speedup can be gained on it no matter how many parallel processors are added.

Task(over_all) = Task(serial) + Task(parallelisable)
Task(over_all) = f fraction of Task (serial) + (1 - f) fraction of Task (parallelisable)
Since time is proportional to work:
T(over_all) = f · T(over_all) + (1 - f) · T(over_all)
T(i) = f · T(over_all) + (1 - f) · T(over_all) / N

So Speedup factor = T(over_all) / T(i)
Speedup factor = 1 / [ f + (1 - f)/N ]

Here f represents the fraction of the task that cannot be parallelised (the serial fraction).

Amdahl's Law can be used to estimate the reduction in execution time when a mix of serial and parallelisable work is executed across multiple processors.

This law can also be used to estimate the performance gains of concurrent execution by threads.
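The formula above can be sketched directly. For example, with a serial fraction f = 0.1 and N = 8 processors, the speedup is 1 / (0.1 + 0.9/8) ≈ 4.7, well below the ideal factor of 8:

```java
public class Amdahl {
    // Amdahl's Law: speedup = 1 / (f + (1 - f) / n), where
    // f is the serial (non-parallelisable) fraction of the task
    // and n is the number of processors
    public static double speedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }
}
```

With f = 0 the formula degenerates to a speedup of N, matching the 7-processor example above.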



Wednesday, March 13, 2013

Different Concurrency Models - Java Vs Others

 

  Concurrency Models in Programming Languages

 

 Two main paradigms or models of concurrency exist in programming languages today. One is the shared-state model, which utilizes multiple threads of execution synchronized using locks. The other is the share-nothing model, which uses asynchronous message passing across lightweight processes or threads.
An important factor in why concurrency models are becoming so popular is the multi-core trend: processor manufacturers keep increasing the number of cores per processor, giving access to processors with 4, 8 or more cores capable of doing multiple calculations in parallel. Consequently, there has been increased interest in software engineering techniques that fully utilize the capabilities offered by these processors.

Shared-state model

Java supports the shared-state concurrency model, which relies on synchronizing concurrent threads via concurrency-related objects such as locks, barriers, and latches. All concurrency abstractions in Java are based on the shared mutable state paradigm and are prone to the vulnerabilities of race conditions, blocking/contention, and deadlocks. Since Java 5, new packages and classes have been added to improve the concurrency model, and Java 7 introduced Fork/Join. Fork/Join allows programmers to utilize finer-grained concurrency by letting independent task components be parallelized through recursive techniques. The individual subtasks, known as Fork/Join subtasks, are mapped to a pool of threads that initiates and manages their execution using advanced queuing and scheduling techniques.
Thread management in Java imposes a number of challenges. In applications that demand execution of a large number of concurrent processes, in the order of thousands, the JVM's threads prove slow because of stack-maintenance overhead, lock contention, and context-switch overhead.
Apart from the expensive context switching and the heavyweight nature of threads in Java, the main problem is the shared-state model itself. As a model of computation, threads are non-deterministic, and with the shared state of the Java memory model, when multiple threads can potentially access a piece of information in memory concurrently, the programming model becomes highly complex.

Share-nothing model

The other model, the share-nothing model, supported by Erlang, has been used extensively to achieve a high level of concurrency. Erlang has popularized concurrency-oriented programming by implementing an extremely lightweight process model on top of the operating system and platform. The actor model in Erlang is based on strong isolation semantics: self-contained processes with asynchronous message passing.
On the JVM, Scala provides the same model as Erlang. Scala brings actor-based concurrency to the JVM, allowing the design of scalable concurrent applications that automatically take advantage of multicore processors. Scala actors provide a better model for programming concurrently in the virtual machine environment, and the share-nothing model can be argued to have advantages over the JVM's native shared-state threading.

Scala Actors

Scala Actors are based on the share-nothing, message-passing model. An actor is a mathematical model of concurrent computation that encapsulates data, code and its own thread of control, and communicates asynchronously using immutable message-passing techniques. Because the basic architecture is share-nothing, each actor appears to act in its own process space. The success and scalability of an actor implementation depend largely on the ability of the language to implement lightweight processes on top of the native threading model. Every actor has its own mailbox for storing messages, implemented as an asynchronous, race-free, non-blocking queue. Scala actors use Fork/Join as the underlying implementation and expose a concurrency model that encourages shared-nothing structures.