02 August 2010

Getting Started with Terracotta Toolkit – Part 1



The Terracotta Toolkit was released as part of Terracotta 3.3.0. The Toolkit is a delight for developers working on scalable apps and frameworks. For more details on the features of the Toolkit, refer to this link.
In this post we shall work through a few samples using the Toolkit. We shall start by looking at Cluster Events and Clustered Queues, and then move on to clustered Locks.

Let's start with some general information about the Toolkit. Refer to the Toolkit javadoc for more details here

Prerequisites

  • Download Terracotta 3.3.0 from the Download page
  • Include terracotta-toolkit-1.0-runtime-1.0.0.jar in your classpath for using the Toolkit. The jar is present inside the Terracotta_Install_dir/common folder.
  • Initializing the Toolkit

    Before any operations can be performed using the Toolkit, it needs to be initialized first. The initialization code is very simple, just a single line:

    ClusteringToolkit clustering = new TerracottaClient("localhost:9510").getToolkit();
    

    This line initializes the Terracotta Client and gets an instance of the Toolkit to work with. The argument passed to TerracottaClient is the IP address and port of the Terracotta Server.

    Here is a simple helper function for initializing the Toolkit.

    ClusteringToolkit clusterToolkit = null;

    public void initializeToolKit(String serverAdd) {
        clusterToolkit = new TerracottaClient(serverAdd).getToolkit();
    }
    


    Playing with Toolkit Queue

    The Toolkit provides various data structures that you can use, a Queue being one of them.

    Let's try to create a Producer-Consumer sample using the Toolkit Queue. Here is what we are trying to do:

    [Diagram: a Producer pushes Work items onto a clustered Queue; Consumers in separate JVMs take them off]
    We have a producer which is going to produce some work, and consumers shall process the work in different JVMs.

    To achieve this we need to do the following:

    • Create a Work object which shall be pushed by the Producer to the Consumers.
    • Create a Producer that shall produce the work.
    • Create a Consumer which shall consume the work. We can have multiple instances of Consumers.

    Let's have a look at the Work object:

    class Work implements Serializable, Runnable {
        private String workItem;

        Work(String workItem) {
            this.workItem = workItem;
        }

        public String getWorkItem() {
            return workItem;
        }

        public void setWorkItem(String workItem) {
            this.workItem = workItem;
        }

        public void run() {
            System.out.println(Thread.currentThread().getName() + " processing - " + getWorkItem());
        }
    }
    

    NOTE: The Objects that shall be pushed onto the Queue should implement Serializable.

    What have we done here? It's a very simple implementation where we just print what the Producer pushed.
    Implementing Runnable is not necessary; I just added it so that I can push the work unit directly to an Executor.

    Creating Producer

    
    static final String QUEUE_NAME = "DATA_QUEUE";

    public void startProducer(int capacity) {
        BlockingQueue<byte[]> queue = clusterToolkit.getBlockingQueue(QUEUE_NAME, capacity);
        System.out.println("Starting Producer....");
        for (int i = 0; i < capacity; i++) {
            try {
                // put() blocks when the queue is full; add() would throw IllegalStateException
                queue.put(serializeObject(new Work("" + i)));
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
    

    Creating a clustered Queue is a one-line job. We have already initialized the Toolkit; now we just call the getBlockingQueue() API on the Toolkit, and we get a Distributed Queue backed by Terracotta :)
    After creating the Queue, we keep adding work to it until its capacity is reached.
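    The producer calls serializeObject() before enqueueing, and the consumer calls deserializeObject() after take(). Those helpers are not part of the Toolkit API and are not shown above, so here is a minimal sketch using plain Java serialization; the class name SerializationUtil is my own, but the method names match the calls in the samples.

    ```java
    import java.io.*;

    public class SerializationUtil {

        // Turn any Serializable object into a byte[] suitable for the clustered queue.
        public static byte[] serializeObject(Serializable obj) {
            try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
                 ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
                oos.flush();
                return bos.toByteArray();
            } catch (IOException e) {
                throw new RuntimeException("Failed to serialize " + obj, e);
            }
        }

        // Rebuild the object from the bytes taken off the queue.
        public static Object deserializeObject(byte[] bytes) {
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject();
            } catch (IOException | ClassNotFoundException e) {
                throw new RuntimeException("Failed to deserialize", e);
            }
        }
    }
    ```

    Note that this round-trips any Serializable, which is why the Work class must implement Serializable.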

    Creating Consumer

    public void startConsumer(int capacity) {
        BlockingQueue<byte[]> queue = clusterToolkit.getBlockingQueue(QUEUE_NAME, capacity);

        // Lets add some threads to the consumer
        ExecutorService executors = Executors.newFixedThreadPool(5);

        // a dumb way for a loop
        while (true) {
            try {
                // take() blocks until a work item is available
                executors.execute((Work) deserializeObject(queue.take()));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
    

    Aha! No difference in creation; the only difference is that our Consumer calls the take() API on the Queue to consume the work objects. Here we have a fixed-size thread pool that keeps consuming the work that the producer is pushing.

    This is it, our implementation is complete :) We are ready to run the sample. To execute it we need three steps:

    • Start Terracotta Server
    • Start Producer
    • Start one or more Consumers

    See them in action yourself.
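    If you want to exercise the producer/consumer flow before wiring up a Terracotta server, the same pattern runs in a single JVM by swapping the clustered queue for java.util.concurrent's ArrayBlockingQueue. This is a sketch of my own (class and method names are hypothetical), useful as a smoke test of the pattern, not a substitute for the clustered version:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class LocalQueueDemo {

        // Run the producer/consumer loop locally; returns how many items were processed.
        public static int runDemo(final int count) throws Exception {
            // Locally, ArrayBlockingQueue stands in for the Toolkit's clustered queue.
            final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(count);
            final AtomicInteger processed = new AtomicInteger();
            final CountDownLatch done = new CountDownLatch(count);

            // Producer thread, analogous to startProducer().
            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < count; i++) {
                            queue.put("work-" + i); // put() blocks when the queue is full
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            producer.start();

            // Consumer pool, analogous to startConsumer().
            ExecutorService executors = Executors.newFixedThreadPool(5);
            for (int i = 0; i < count; i++) {
                final String item = queue.take(); // blocks until work is available
                executors.execute(new Runnable() {
                    public void run() {
                        System.out.println(Thread.currentThread().getName()
                                + " processing - " + item);
                        processed.incrementAndGet();
                        done.countDown();
                    }
                });
            }
            done.await();
            producer.join();
            executors.shutdown();
            return processed.get();
        }

        public static void main(String[] args) throws Exception {
            System.out.println("Processed " + runDemo(10) + " items");
        }
    }
    ```

    The only change needed to cluster this is replacing the ArrayBlockingQueue with the queue returned by getBlockingQueue(), which is exactly the point of the Toolkit's API shape.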

    The complete code can be found at http://code.google.com/p/terracotta-samples/

    Some ideas around what we can do with a Distributed Queue:
    1. Distributed SEDA stages across JVMs
    2. Distributed Task processing
    ... add your own ideas..

    I would be interested in knowing what you do with the Terracotta Toolkit Queue. Please do add comments with your implementations.

    What's Next?

    In the next post we shall explore Cluster Events in the Terracotta Toolkit.
