mercredi 28 mai 2014

Indexing keys and values in MapDB

MapDB is a high-performance pure Java database. It provides concurrent collections (Maps, Sets and Queues) backed by disk storage or off-heap memory.
It also provides a powerful mechanism to synchronize collections, which can be used to build multiple indexes on top of a primary collection. The following is an example showing how to index both the keys and the values of a main collection.

1. Define a serializable class
// this class must implement Serializable in order to be stored
public class Person implements Serializable {
  String firstname;
  String lastname;
  Integer age;
  boolean male;

  public Person(String f, String l, Integer a, boolean m) {
    this.firstname = f;
    this.lastname = l;
    this.age = a;
    this.male = m;
  }

  public boolean isMale() {
    return male;
  }

  public String toString() {
    return "Person [firstname=" + firstname + ", lastname=" + lastname + ", age=" + age + ", male=" + male + "]";
  }
}

2. Define a map of persons by id
// stores each person under its id (ids must be distinct, otherwise entries overwrite each other)
BTreeMap<Integer, Person> primary = DBMaker.newTempTreeMap();
primary.put(111, new Person("bIs9r", "NWmqoxFf", 92, true));
primary.put(222, new Person("4KXp8", "QrPsabf1", 31, false));
primary.put(333, new Person("eJLIo", "SJwJidWk", 6, true));
primary.put(444, new Person("LGW58", "vteM4khp", 42, false));
primary.put(555, new Person("tIM8R", "Rzq75ONh", 57, false));
primary.put(666, new Person("KqKRE", "BnpUV4dW", 26, true));

3. Define a gender-based index
// stores (gender, primary key) pairs
NavigableSet<Fun.Tuple2<Boolean, Integer>> genderIndex = new TreeSet<Fun.Tuple2<Boolean, Integer>>();

// gender-based index: bind the secondary set to the primary map so it contains (gender, id) pairs
Bind.secondaryKey(primary, genderIndex, new Fun.Function2<Boolean, Integer, Person>() {
  public Boolean run(Integer key, Person value) {
    return Boolean.valueOf(value.isMale());
  }
});
4. Use the gender index to read all male persons
// iterate over the ids of all persons whose secondary key is true
Iterable<Integer> ids = Fun.filter(genderIndex, true);
for (Integer id : ids) {
  System.out.println(primary.get(id));
}

MapDB offers multiple ways to define indexes on a given collection, and it can also be extended to define specific kinds of indexes. The following is an example implementation of a bitmap index in MapDB:
public static <K, V, K2> void secondaryKey(MapWithModificationListener<K, V> map, final Map<K2, Set<K>> secondary,
      final Fun.Function2<K2, K, V> fun) {
  // fill if empty
  if (secondary.isEmpty()) {
    for (Map.Entry<K, V> e : map.entrySet()) {
      K2 k2 = fun.run(e.getKey(), e.getValue());
      Set<K> set = secondary.get(k2);
      if (set == null) {
        set = new TreeSet<K>();
        secondary.put(k2, set);
      }
      set.add(e.getKey());
    }
  }
  // hook listener to keep the secondary index synchronized with the primary map
  map.modificationListenerAdd(new MapListener<K, V>() {
    public void update(K key, V oldVal, V newVal) {
      if (newVal == null) {
        // removal
        secondary.get(fun.run(key, oldVal)).remove(key);
      } else if (oldVal == null) {
        // insert
        K2 key2 = fun.run(key, newVal);
        Set<K> set = secondary.get(key2);
        if (set == null) {
          set = new TreeSet<K>();
          secondary.put(key2, set);
        }
        set.add(key);
      } else {
        // update, must remove old key and insert new
        K2 oldKey = fun.run(key, oldVal);
        K2 newKey = fun.run(key, newVal);
        if (oldKey == newKey || oldKey.equals(newKey))
          return;
        Set<K> set1 = secondary.get(oldKey);
        if (set1 != null) {
          set1.remove(key);
        }
        Set<K> set2 = secondary.get(newKey);
        if (set2 == null) {
          set2 = new TreeSet<K>();
          secondary.put(newKey, set2);
        }
        set2.add(key);
      }
    }
  });
}
This new index can be used as follows:
final Map<Boolean, Set<Integer>> bitmapIndex = new HashMap<Boolean, Set<Integer>>();
secondaryKey(primary, bitmapIndex, new Fun.Function2<Boolean, Integer, Person>() {
  public Boolean run(Integer key, Person value) {
    return Boolean.valueOf(value.isMale());
  }
});


samedi 3 mai 2014

Exploiting Big RAMs

These are notes from a talk given by Neil Ferguson about how to take advantage of very large amounts of memory to improve the performance of server-side applications.

With the increase in the amount of data managed by any enterprise or web application, there is a continuous need to store more and more data while providing real-time access to it. The performance of such applications can be improved by serving data directly from memory and efficiently using the huge amounts of available memory, which may reach many terabytes in the near future.

In fact, memory prices are continuously decreasing while capacities increase, to the point where terabytes of RAM will be available in servers in the near future. The cost of 1MB of RAM was about $0.01 in January 2009 and $0.005 in 2013 (source: Memory Prices (1957-2013)). Today one can buy a workstation with 512GB of RAM for $2299, and Intel processors (e.g. Xeon) allow up to 144GB of RAM, with around a terabyte for new-generation processors dedicated to server-class machines. However, it is still not practical to do much with such an amount of RAM. Why?

Garbage Collection Limitations
In any garbage-collected environment (like the JVM), if the object allocation rate overtakes the rate at which the GC collects these objects, then long GC pauses (periods during which the JVM stops the application to run only the garbage collector) may become very frequent. One way to avoid this problem is to leave plenty of free space in the heap. The thing is, leaving a third of 3GB free is not a big deal, but leaving a third of 300GB free is, even though the ratio between free space and live data is the same.
The bad news is that even with a large amount of free space, there may be situations where GC pauses are too long, typically for memory defragmentation.
You can improve application performance with -XX:+ExplicitGCInvokesConcurrent as a workaround to avoid long pauses when System.gc() or Runtime.getRuntime().gc() is explicitly called (e.g. on direct ByteBuffer allocation).

Off-Heap storage
To overcome some of these limitations of JVMs, or of garbage-collected environments in general, allocating memory off-heap can be a solution. This can be done in different ways:

1. Direct ByteBuffers
The NIO API allows allocating off-heap memory (i.e. not part of the process heap and not subject to GC) for storing long-lived data via ByteBuffer.allocateDirect(int capacity). The total capacity is limited to what is specified with the JVM option -XX:MaxDirectMemorySize.
Allocation through direct ByteBuffers has implications for GC (long pauses) when buffers are not freed fast enough, which makes it unsuitable for short-lived objects, i.e. allocating and freeing a lot of memory frequently.
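A minimal example of allocating and using a direct ByteBuffer (class name is my own, for illustration only):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
  public static void main(String[] args) {
    // allocate 1 KiB of native memory, outside the Java heap:
    // the GC neither scans nor moves this storage
    ByteBuffer buffer = ByteBuffer.allocateDirect(1024);

    // write and read primitive values at absolute offsets
    buffer.putInt(0, 42);
    buffer.putLong(4, 123456789L);

    System.out.println(buffer.isDirect());   // true for off-heap buffers
    System.out.println(buffer.getInt(0));    // 42
    System.out.println(buffer.getLong(4));   // 123456789
    // the native memory is released only when the buffer object
    // itself is eventually garbage collected
  }
}
```

Note the last comment: freeing is tied to the GC of the small on-heap buffer object, which is exactly why heavy allocate/free churn of direct buffers interacts badly with the collector.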

2. sun.misc.Unsafe 
Direct ByteBuffers themselves rely on sun.misc.Unsafe.allocateMemory to allocate a big block of off-heap memory and on sun.misc.Unsafe.freeMemory to explicitly free it.
Here is a very simple implementation of a wrapper class based on the Unsafe API for managing off-heap memory:
public class OffHeapObject<T extends Serializable> {
  // fields
  private static final Unsafe UNSAFE;

  static {
    try {
      // get the Unsafe instance using reflection
      Field field = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
      field.setAccessible(true);
      UNSAFE = (sun.misc.Unsafe) field.get(null);
    } catch (Exception e) {
      throw new IllegalStateException("Could not access theUnsafe instance field", e);
    }
  }

  private static final int INT_SIZE = 4;
  // base address of the allocated block
  private long address;

  // constructor
  public OffHeapObject(T heapObject) {
    // serialize the object
    byte[] data = serialize(heapObject);
    // allocate off-heap memory
    address = UNSAFE.allocateMemory(INT_SIZE + data.length);
    // save the data size in the first bytes
    UNSAFE.putInt(address, data.length);
    // write the data byte by byte to the allocated memory
    for (int i = 0; i < data.length; i++) {
      UNSAFE.putByte(address + INT_SIZE + i, data[i]);
    }
  }

  public T get() {
    int length = UNSAFE.getInt(address);
    // read the data back from off-heap memory
    byte[] data = new byte[length];
    for (int i = 0; i < data.length; i++) {
      data[i] = UNSAFE.getByte(address + INT_SIZE + i);
    }
    // return the deserialized object
    return deserialize(data);
  }

  // free the allocated space to avoid memory leaks
  public void deallocate() {
    // TODO make sure this is not called more than once
    UNSAFE.freeMemory(address);
  }

  // serialize/deserialize helpers based on standard Java serialization (not shown)
}
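The serialize and deserialize helpers are left undefined above; a minimal sketch based on standard Java serialization could look as follows (the helper class name is my own, not from the talk):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationHelpers {
  // turn a serializable object into its byte representation
  public static byte[] serialize(Serializable object) {
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
         ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(object);
      oos.flush();
      return bos.toByteArray();
    } catch (IOException e) {
      throw new IllegalStateException("serialization failed", e);
    }
  }

  // rebuild an object from its byte representation
  @SuppressWarnings("unchecked")
  public static <T> T deserialize(byte[] data) {
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
      return (T) ois.readObject();
    } catch (IOException | ClassNotFoundException e) {
      throw new IllegalStateException("deserialization failed", e);
    }
  }
}
```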
OffHeapObject can be used, for instance, to store the values of cached data, e.g. using Google Guava to store key-OffHeapObject pairs where the latter wraps data held in off-heap memory. This way GC pauses can be considerably reduced, as these wrapper objects are just references and do not occupy big blocks of heap memory. Also, the process size does not grow indefinitely, as fragmentation is reduced.
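A minimal sketch of this caching pattern: the on-heap map holds only small wrapper objects while the payloads live off-heap. Here a direct ByteBuffer stands in for the Unsafe-based OffHeapObject so the example runs on a stock JDK, and all names are my own; in a real Guava setup one would also register a removal listener that calls deallocate on eviction:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class OffHeapCacheDemo {
  static class OffHeapValue {
    private final ByteBuffer buffer; // off-heap storage, invisible to the GC

    OffHeapValue(byte[] data) {
      buffer = ByteBuffer.allocateDirect(data.length);
      buffer.put(data);
    }

    byte[] get() {
      byte[] data = new byte[buffer.capacity()];
      buffer.position(0);
      buffer.get(data);
      return data;
    }
  }

  public static void main(String[] args) {
    // the heap only holds integer keys and tiny wrapper objects
    Map<Integer, OffHeapValue> cache = new HashMap<Integer, OffHeapValue>();
    cache.put(1, new OffHeapValue("some cached payload".getBytes(StandardCharsets.UTF_8)));

    String value = new String(cache.get(1).get(), StandardCharsets.UTF_8);
    System.out.println(value); // prints "some cached payload"
  }
}
```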

Note that this implementation of OffHeapObject is very basic, and there is a performance impact to using off-heap memory: everything needs to be serialized on writes to off-heap memory and deserialized on reads from it, and these operations have some overhead and reduced throughput compared to on-heap storage.
Furthermore, not every object can be stored in off-heap memory; for instance, the OffHeapObject that keeps a reference to a block of off-heap memory is itself stored on the heap.
The performance of this implementation may be enhanced with techniques like data alignment.

Some existing caches based on off-heap storage

Big RAM: How Java Developers Can Fully Exploit Massive Amounts of RAM 

  • Understanding Java Garbage Collection presentation at JUG Victoria Nov/2013 - Azul Systems
  • Measuring GC pauses with jHiccup - Azul Systems
  • A good documentation of the Unsafe API can be found in this blog post.
  • How Garbage Collection works in Java - blog post.

Random resources related to Docker



Continuous Integration

Environment configuration



  • Atomic project - Deploy and Manage your Docker Containers 
  • GearD - The Intersection of PaaS, Docker and Project Atomic 
  • Classification of the ecosystem of startups based on Docker 
  • Slides from DockerFr Meetup on Docker ecosystem
  • OpenCore a Big Data (Hadoop) as a Service provider

API Client
work in progress