Wednesday, October 30, 2013

Fork and join in Java 7 – JSR 166 concurrency utilities

One of the most interesting improvements in Java 7 is the better support for concurrency. The JSR 166 Concurrency Utilities bring some very helpful additions, and from my point of view the fork/join framework has a high potential for practical use in software engineering. Fork and join provides a very simple programming model for algorithms that can be implemented as recursive tasks, and a lot of problems can be expressed as divide-and-conquer algorithms.
In the coming years we will see an increasing number of cores in standard desktops, notebooks and servers. The reason is simple: it is cheaper to add additional cores than to build a faster single processor. So we will have to write more software that supports concurrency in order to benefit from the better hardware.
To be honest, I don't like concurrency. My personal rule is: "You need a good reason to implement concurrency, and if you have to do it, be really careful." In recent years I have seen more buggy implementations than working ones. This is why I like the fork/join framework: a clear programming model that takes care of the boilerplate code helps prevent errors. But if you intend to use fork and join, please take some time to understand its behavior.
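Before we look at the Fibonacci sample, here is what the programming model looks like for a more typical divide-and-conquer problem: summing a long array by splitting the index range recursively. This is just a minimal sketch of my own (the ArraySumTask class and its threshold value are made up for illustration, they are not part of the JSR 166 samples):

// Sketch: ArraySumTask.java  [hypothetical, for illustration only]

package com.sprunck.sample;

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ArraySumTask extends RecursiveTask<Long> {

    private static final long serialVersionUID = 1L;

    // below this range size the values are summed sequentially instead of forking further
    private static final int SEQUENTIAL_THRESHOLD = 1000;

    private final long[] values;
    private final int from;
    private final int to; // exclusive

    public ArraySumTask(long[] values, int from, int to) {
        this.values = values;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= SEQUENTIAL_THRESHOLD) {
            long sum = 0L;
            for (int i = from; i < to; i++) {
                sum += values[i];
            }
            return sum;
        }
        final int middle = from + (to - from) / 2;
        final ArraySumTask left = new ArraySumTask(values, from, middle);
        final ArraySumTask right = new ArraySumTask(values, middle, to);
        left.fork();                          // process the left half asynchronously
        return right.compute() + left.join(); // compute the right half here, then join the left
    }

    public static void main(String[] args) {
        final long[] values = new long[1000 * 1000];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
        final ForkJoinPool pool = new ForkJoinPool();
        System.out.println("sum = " + pool.invoke(new ArraySumTask(values, 0, values.length)));
    }
}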
The sample in files #1 and #2 is very similar to the sample code in the Java 7 documentation. In general, computing Fibonacci numbers with a recursive algorithm is not a good idea, because there is a better linear solution (compare http://nayuki.eigenstate.org/page/fast-fibonacci-algorithms), but it is easier to implement and understand than other examples. So, let's have a look at the sample:
// File #1: FibonacciTask.java  [error handling, parameter validation and asserts removed] 

package com.sprunck.sample;

import java.util.concurrent.RecursiveTask;

public class FibonacciTask extends RecursiveTask<Long> {

    private static final long serialVersionUID = 1L;

    private final long inputValue;

    public FibonacciTask(long inputValue) {
        this.inputValue = inputValue;
    }

    @Override
    public Long compute() {

        if (inputValue == 0L) {
            return 0L;
        } else if (inputValue <= 2L) {
            return 1L;
        } else {
            final FibonacciTask firstWorker = new FibonacciTask(inputValue - 1L);
            firstWorker.fork();
            
            final FibonacciTask secondWorker = new FibonacciTask(inputValue - 2L);
            return secondWorker.compute() + firstWorker.join();
        }
    }
}


// File #2: FibonacciTaskTest.java 

package com.sprunck.sample;

import java.util.concurrent.ForkJoinPool;
import junit.framework.Assert;
import org.junit.Test;

public class FibonacciTaskTest {

    // it makes no sense to create more threads than available cores (no speed improvement here)
    private static final int AVAILABLE_PROCESSORS = Runtime.getRuntime().availableProcessors();

    // create thread pool
    private final ForkJoinPool pool = new ForkJoinPool(AVAILABLE_PROCESSORS);

    @Test
    public void testFibonacciArray() {

        // more test data: http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibtable.html
        long results[] = { 0L, 1L, 1L, 2L, 3L, 5L, 8L, 13L, 21L, 34L, 55L, 89L, 144L, 233L, 377L, 610L, 987L, 1597L,
                2584L, 4181L, 6765L };
        for (int inputValue = 0; inputValue < results.length; inputValue++) {

            final FibonacciTask task = new FibonacciTask(inputValue);
            System.out.print("Fibonacci(" + inputValue + ") = ");

            final long result = pool.invoke(task);
            System.out.println(result);

            Assert.assertEquals(results[inputValue], result);
        }
    }
}

// Output of FibonacciTaskTest.java
Fibonacci(0) = 0
Fibonacci(1) = 1
Fibonacci(2) = 1
Fibonacci(3) = 2
Fibonacci(4) = 3
Fibonacci(5) = 5
Fibonacci(6) = 8
Fibonacci(7) = 13
Fibonacci(8) = 21
Fibonacci(9) = 34
Fibonacci(10) = 55
Fibonacci(11) = 89
Fibonacci(12) = 144
Fibonacci(13) = 233
Fibonacci(14) = 377
Fibonacci(15) = 610
Fibonacci(16) = 987
Fibonacci(17) = 1597
Fibonacci(18) = 2584
Fibonacci(19) = 4181
Fibonacci(20) = 6765
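Outside of the JUnit test, the same task can also be started from a plain main method. The following sketch is hypothetical (the FibonacciMain class is not part of the original sources); note that since Java 7 the no-argument ForkJoinPool constructor already uses Runtime.getRuntime().availableProcessors() as its parallelism, so the explicit pool size from the test is optional:

// Sketch: FibonacciMain.java  [hypothetical runner, for illustration only]

package com.sprunck.sample;

import java.util.concurrent.ForkJoinPool;

public class FibonacciMain {

    public static void main(String[] args) {
        // the no-argument constructor already uses Runtime.getRuntime().availableProcessors()
        final ForkJoinPool pool = new ForkJoinPool();
        for (int inputValue = 0; inputValue <= 20; inputValue++) {
            System.out.println("Fibonacci(" + inputValue + ") = " + pool.invoke(new FibonacciTask(inputValue)));
        }
        pool.shutdown();
    }
}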
So far this is a simple and clear solution: no boilerplate code for concurrency, e.g. thread synchronization.
But I'd like to encourage you to take a deeper look at what happens inside this solution. In files #3 and #4 you will find an enhanced version of the same program. The only difference between the first and the second version is some code that traces what happens during execution and a small slowTask() method that simulates more realistic behavior.
// File #3: FibonacciTaskTraces.java 

package com.sprunck.sample;

import java.util.concurrent.RecursiveTask;

public class FibonacciTaskTraces extends RecursiveTask<Long> {

    private static final long serialVersionUID = 1L;

    // just needed to format debug output
    public static final String OUTPUT_PREFIX = " | ";

    private final String prefix;

    private final long inputValue;

    public FibonacciTaskTraces(long inputValue, final String prefix) {
        this.inputValue = inputValue;
        this.prefix = prefix;
    }

    @Override
    public Long compute() {

        if (inputValue == 0L) {
            slowTask();
            return 0L;
        } else if (inputValue <= 2L) {
            slowTask();
            return 1L;
        } else {
            final long firstValue = inputValue - 1L;
            System.out.println(prefix + " - Fibonacci(" + firstValue + ") <- fork    [" + Thread.currentThread().getName() + "]");
            final FibonacciTaskTraces firstWorker = new FibonacciTaskTraces(firstValue, prefix + OUTPUT_PREFIX);
            firstWorker.fork();

            final long secondValue = inputValue - 2L;
            System.out.println(prefix + " - Fibonacci(" + secondValue + ") <- compute [" + Thread.currentThread().getName() + "]");
            final FibonacciTaskTraces secondWorker = new FibonacciTaskTraces(secondValue, prefix + OUTPUT_PREFIX);

            final long result = secondWorker.compute() + firstWorker.join();
            System.out.println(prefix + " - Fibonacci(" + inputValue + ") = " + result + " <- join    [" + Thread.currentThread().getName() + "]");
            return result;
        }
    }

    // just to simulate a longer running task (disturbing the other threads)
    private void slowTask() {
        long k = 0L;
        for (int i = 0; i < 1000 * 100; i++) {
            k = i + k;
        }
    }
}
// File #4: FibonacciTaskTracesTest.java

package com.sprunck.sample;

import java.util.concurrent.ForkJoinPool;
import junit.framework.Assert;
import org.junit.Test;

public class FibonacciTaskTracesTest {

    // it makes no sense to create more threads than available cores (no speed improvement here)
    private static final int AVAILABLE_PROCESSORS = Runtime.getRuntime().availableProcessors();

    // create thread pool
    private final ForkJoinPool pool = new ForkJoinPool(AVAILABLE_PROCESSORS);

    @Test
    public void testFibonacciArrayTraces() {

        // more test data: http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibtable.html
        long results[] = { 0L, 1L, 1L, 2L, 3L, 5L, 8L, 13L };
        for (int inputValue = 0; inputValue < results.length; inputValue++) {

            final FibonacciTaskTraces task = new FibonacciTaskTraces(inputValue, " | ");
            System.out.println("invoke Fibonacci(" + inputValue + ")  <- [" + Thread.currentThread().getName() + "]");

            final long result = pool.invoke(task);
            System.out.println("result of Fibonacci(" + inputValue + ") = " + result + "\n");

            Assert.assertEquals(results[inputValue], result);
        }
    }
}
// Output of FibonacciTaskTracesTest.java
[trace output shortened: for each input the test prints an "invoke Fibonacci(n)" line, followed by the fork, compute and join traces of the recursive tasks, each with the name of the executing thread (main, ForkJoinPool-1-worker-1, ForkJoinPool-1-worker-2); the last run ends with result = 13]
The output now gives you a deeper look into how the program is processed. The following patterns of Fibonacci number calculation appear:
  • the first three Fibonacci numbers are processed in the main thread,
  • the next Fibonacci number is processed in just one new worker thread (ForkJoinPool-1-worker-1) and
  • starting with the fifth Fibonacci number, two worker threads (ForkJoinPool-1-worker-1 and ForkJoinPool-1-worker-2) are used. 
The algorithm is inefficient, because there are a lot of redundant operations (re-calculation of the same Fibonacci numbers) during processing. In a real-life application you should be careful with this kind of inefficient algorithm. Some traces help to understand what happens.
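For comparison, this is what the linear solution mentioned above could look like. The FibonacciIterative class is a hypothetical helper added here for illustration; it computes the same values iteratively, without recursion and without threads:

// Sketch: FibonacciIterative.java  [hypothetical, for illustration only]

package com.sprunck.sample;

public final class FibonacciIterative {

    // computes Fibonacci(n) in O(n) by keeping only the last two values
    public static long fibonacci(long n) {
        if (n == 0L) {
            return 0L;
        }
        long previous = 0L; // Fibonacci(0)
        long current = 1L;  // Fibonacci(1)
        for (long i = 2L; i <= n; i++) {
            final long next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 20; n++) {
            System.out.println("Fibonacci(" + n + ") = " + fibonacci(n));
        }
    }
}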
Recommendations
  1. The use of fork and join is easy and straightforward, but take some time to trace and understand your implementation.
  2. Sometimes it is helpful to implement two versions of the same algorithm (one for analysis and a second one for production).
  3. Spending some time on designing and understanding concurrent algorithms is a good investment.
The probability calculator has been developed in this way (Demo of probability calculator – PCALC).
