Archive by category: Tutorials

SBT Native Packager – Multi Module / Assembly and Custom Formats

Lately, the questions on Stack Overflow and the issues around sbt-native-packager have often been about these topics:

  • Multi Module Builds
    Aggregating multiple projects into a single native package
  • SBT-Assembly jar
    Aggregating everything into a fat jar and packaging this instead of each single jar file
  • Change Mappings
    Changing the default mappings, removing the ones you don’t need or adding new ones (see the sketch below)
  • Custom Formats
    Creating your own packaging type. The SBT part will only take you minutes.

Within the next 0.7.x and 0.8.x releases we will update the docs as well, but until then you can check out the sbt-native-packager-examples on GitHub.
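To give you an idea of the mapping changes, here is a minimal build.sbt sketch. It assumes sbt 0.13 with sbt-native-packager 0.7.x (whose keys are auto-imported into build.sbt); the README.md and the .bat filter are made-up examples.

// use the java_application archetype of sbt-native-packager
packageArchetype.java_application

// add a new mapping: ship a (hypothetical) README.md with the package
mappings in Universal += file("README.md") -> "README.md"

// remove mappings you don't need, e.g. the generated .bat start scripts
mappings in Universal := (mappings in Universal).value filter {
  case (_, name) => !name.endsWith(".bat")
}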

From Java 7 Futures to Akka actors with Scala

This blog post will show you what a step-by-step transition from a “Java 7 and j.u.c.Future” based implementation to an “Akka actors written in Scala” solution looks like. It will take five steps, which are

  1. Java 7 and futures
  2. Java 8 and parallel streams
  3. Scala and futures
  4. Scala and actors with the ask pattern
  5. Scala and actors (almost) without the ask pattern

The complete code repository can be found on GitHub.

The application

The application is a simple ItemService which lets you get all the items a client, identified by an integer id, owns. The goal is to build some statistics over the different items:

  • How many clients have items with price x
  • How many items have price x

Application Flowchart

The ItemService connects (in theory) to a database, which takes a while to load the data, so we don’t fetch the data sequentially, but concurrently. In the end we gather the results and calculate our statistics.

To see a bit of what’s going on, the ItemService prints out the current thread name after it finishes the getItems(clientId) call.
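For reference, here is a minimal sketch of such an ItemService; the Item fields and the artificial delay are assumptions, the real one is in the repository.

case class Item(id: Int, price: Double)

class ItemService {
  def getItems(clientId: Int): Seq[Item] = {
    Thread.sleep(100) // simulate the slow database call
    println(s"getItems($clientId) on ${Thread.currentThread.getName}")
    (1 to 5) map (i => Item(i, (clientId * i % 10).toDouble))
  }
}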

Java 7 and futures

The first implementation uses j.u.c.Future and some functional sugar provided by Google Guava’s ListenableFutures. The first thing to do is to implement a Callable<List<Item>> which can be started via a j.u.c.ExecutorService.

public static class ItemLoader implements Callable<List<Item>> {

      private final int clientId;

      public ItemLoader(int clientId) {
          this.clientId = clientId;
      }

      @Override
      public List<Item> call() throws Exception {
          ItemService service = new ItemService();
          return service.getItems(clientId);
      }
  }

Nothing fancy. An ItemLoader is like a job which gets configured via its constructor (for which client should I load items) and then instantiates an ItemService and gets the items.

Now create an ExecutorService and submit the jobs (ItemLoader instances).

List<Integer> clients = ...;
// I tried some different ExecutorServices just for fun (and later some benchmarks, hopefully)
int parallelism = 4;
ListeningExecutorService pool = MoreExecutors.listeningDecorator(Executors.newWorkStealingPool(parallelism));
 
// Submit all the futures
List<ListenableFuture<List<Item>>> itemFutures = new ArrayList<>();
for (Integer client : clients) {
    ListenableFuture<List<Item>> future = pool.submit(new ItemLoader(client));
    itemFutures.add(future);
}

The MoreExecutors.listeningDecorator(..) call is from Guava and decorates our initial ExecutorService. This allows a very neat transition from a “list of futures of type Item” to a “future of a list of type Item”.

// Futures == com.google.common.util.concurrent.Futures
// convert list of futures to future of results
ListenableFuture<List<List<Item>>> resultFuture = Futures.allAsList(itemFutures);
 
// blocking until finished - we only wait for a single Future to complete
List<List<Item>> itemResults = resultFuture.get();

You may notice that we have a list of lists. That’s because the ItemService returns a list of items per client. Luckily Google Guava helps us out once more with Iterables.concat, which is mostly called flatten in functional languages. This operation flattens a list of lists of type A to a list of type A.

Iterable<Item> items = Iterables.concat(itemResults);

From here on you can do whatever you want with the list of items.

Pro:
  • Easy implementation
  • Easy configuration of parallelism (ExecutorService)

Contra:
  • No failure handling (what job failed?)
  • Blocking
  • Verbose

Java 8 Streams

Next, we will use the awesome new Java 8 feature: parallel streams. The usage feels a lot like the Scala parallel collections, and that’s why I only look (at the moment) at the Java 8 feature, as we have even more tools for concurrent/parallel programming in Scala.

Talk is cheap, show me the code:

List<Integer> clients = ...;
// create a parallel stream from the list of clients and map each of them to an ItemService call
Stream<List<Item>> serviceResults = clients.parallelStream()
     .map(client -> new ItemService().getItems(client));

// flatten a stream of lists of type Item to a stream of type Item
Stream<Item> items = serviceResults.flatMap(itemList -> itemList.stream());

IMHO the syntax for flattening a stream looks a bit odd, but it’s the way to go. And that’s already all the code!

Pro:
  • Short and expressive implementation

Contra:
  • No failure handling (what job failed?)
  • Blocking
  • Hard to configure parallelism

Scala and Futures

Now we move into the Scala universe. As I mentioned above, I will skip the Scala parallel collections as they are pretty similar to the parallel streams in Java 8. Actually the future-based implementation is pretty similar, too, but I think it’s a better entry into the Scala concurrency world.
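Just to show the similarity, here is the whole flow as a one-line sketch with Scala parallel collections (not used in the rest of this post; it assumes the same clients sequence and ItemService as before):

val items: Seq[Item] = clients.par.map(client => new ItemService().getItems(client)).flatten.seq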

First we need an ExecutionContext, which is similar to the ExecutorService. In fact you can create ExecutionContexts from ExecutorServices. For this small application we use the standard fork-join pool.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._

Now we can create Futures directly inside the code.

val clients = 1 until 10 toSeq
 
// start the futures
val itemFutures: Seq[Future[Seq[Item]]] = clients map { client =>
  Future {
    new ItemService getItems client
  }
}

I wrote out the explicit types so you can see what happens here. Everything inside the Future block will be executed inside a new, anonymous future. Now we again transform the list of futures into a future of a list. Scala brings this out of the box.

// convert list of futures to future of results
val resultFuture: Future[Seq[Seq[Item]]] = Future sequence itemFutures

The next step is the most important one to take away from this implementation, because until now there was no real difference to the Java 7 implementation. In Scala you can call map on a future, which returns a new future with the result value mapped according to your map function. This helps you write non-blocking and readable code, because

  • you don’t have to write the complete logic into one future; instead you can break it up into different methods. You can also derive different result futures from a single loading future
  • you don’t have to wait for the results to transform them

A lot of talk for one line of code. We flatten the list of lists.

// flatten the result
val itemsFuture: Future[Seq[Item]] = resultFuture map (_.flatten)

In the end, we have to wait in this example for the results to become available.

// blocking until all futures are finished, but wait at most 10 seconds
val items = Await.result(itemsFuture, 10 seconds)

If you don’t need to wait and can handle the result at some later point in time, take a look at the callback functions described in the documentation of Scala futures. These provide the possibility to react to failure. For futures and timeouts you can read another blog post here.
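A minimal sketch of the callback style, assuming the itemsFuture from above (nothing blocks, the callback runs on the ExecutionContext):

import scala.util.{ Failure, Success }

itemsFuture onComplete {
  case Success(items) => println(s"Loaded ${items.size} items")
  case Failure(error) => println(s"Loading items failed: $error")
}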

Pro:
  • Short and expressive implementation
  • Can be made non-blocking

Contra:
  • Failure handling only in callbacks
  • If blocking, you have to define a lot of timeouts

Scala and actors with the ask pattern

All of the solutions above are valid if you don’t care about error handling and fault tolerance. This can be okay in some cases, but it’s not much effort to have these too :) And that’s where actors come in. If you have no idea what actors are, scroll through these slides from Jonas Bonér, the creator of Akka, which should give you enough insight.

The first implementation will use the ask pattern, which generates futures from messages you send. This is often needed at the boundaries between an actor system and the non-actor-based part of your code. In general, my experience is that it pays off to move this boundary towards “doing everything possible inside the actor system”. You’ll get the most out of Akka this way.

Let’s take a look at our actor implementation

package actors
 
import akka.actor.Actor
import services.scala.{ Item, ItemService }
import ItemServiceActor._
 
class ItemServiceActor extends ItemService with Actor {
  def receive = {
    case GetItems(client) => sender ! getItems(client) // async answer
  }
}
 
/** Message API */
object ItemServiceActor {
  case class GetItems(client: Int)
}

Basically it wraps our existing service in an actor. You may have wondered why I always instantiated a new service when calling it: the service itself was not thread-safe in any way. Now that the service is represented by an actor, it is automatically thread-safe, because an actor guarantees to process only one message at a time (which in this case means one call to getItems()).

Now we do the plumbing and ask the ItemServiceActor for a list of items.

// in addition to the future imports shown above
import akka.actor.{ ActorSystem, Props }
import akka.pattern.ask
import akka.util.Timeout
import ItemServiceActor._

// Create the actor system which manages all the actors
val system = ActorSystem()
val itemService = system.actorOf(Props[ItemServiceActor], "itemService")

// available clients
val clients = 1 until 10 toSeq

// how long until the ask times out
implicit val timeout = Timeout(10 seconds)
// start the futures
val itemFutures: Seq[Future[Seq[Item]]] = clients map { client =>
  // this is the ask: itemService ? GetItems(client)
  (itemService ? GetItems(client)).mapTo[Seq[Item]]
}

The code is almost self-explanatory, except for the mapTo[Seq[Item]]. At the moment Akka’s raw actor implementation doesn’t provide any type safety for the messages, so an ask will always return a future of type Any, which is then mapped to the specific type you want with the mapTo call. The rest of the code is similar to the Scala-with-futures code.

But what about error handling? The ask pattern provides a bit more functionality than plain futures. In a follow-up blog post I will show you how to handle a flaky ItemService. In short, you can do this:

val future = akka.pattern.ask(actor, msg1) recover {
  case e: ArithmeticException => 0
}

Pro:
  • Thread-safe service (fewer instances needed)

Contra:
  • Ask timeout hell

Scala and actors (almost) without the ask pattern

As I mentioned in the last chapter, moving the boundary towards doing everything inside the actor system is a good thing, so that’s what we will do. Along the way we also get a glimpse of the different and powerful error handling strategies of Akka (and the actor model itself).

This implementation needs a bit more on the message API side, so we start there.

/**
 * Defining the aggregation API
 */
object ItemServiceAggregator {
 
  // ---- PUBLIC ----
  case class GetItemStatistics(clients: Seq[Int])
  case class ItemStatistics(results: Seq[(Int, Seq[Item])])
 
  // ---- INTERNAL ----
  private[actors] case class GetItems(client: Int, job: Long)
  private[actors] case class Items(client: Int, items: Seq[Item], job: Long)
 
  private[actors] case class Job(id: Long, source: ActorRef, clients: Seq[Int], results: Seq[(Int, Seq[Item])] = Seq.empty) {
    def isFinished(): Boolean = results.size == clients.size
  }
 
  /** Worker actor similar to ask pattern ServiceActor */
  class ItemServiceActor extends ItemService with Actor {
    def receive = {
      case GetItems(client, job) => sender ! Items(client, getItems(client), job) // async answer
    }
  }
 
}

The main differences are

  • A result case class ItemStatistics which holds a sequence of (client, items) pairs
  • A case class Job which encapsulates the state of a GetItemStatistics request
  • The actual ItemServiceActor is almost identical, but preserves the job id

Now to the actual ItemServiceAggregator which starts a job and distributes the work to the ItemServiceActors.

package actors
 
import akka.actor._
import akka.routing.RoundRobinPool
import scala.collection.mutable.{ Map => MutableMap }
import services.scala.ItemService
import services.scala.Item
import ItemServiceAggregator._
 
class ItemServiceAggregator extends Actor with ActorLogging {
 
  // Create a pool of actors with RoundRobin routing algorithm
  val worker = context.system.actorOf(
    props = Props[ItemServiceAggregator.ItemServiceActor].withRouter(RoundRobinPool(10)),
    name = "itemService"
  )
 
  /** aggregation map: (request, sender) -> (client, items) */
  val jobs = MutableMap[Long, Job]()
 
  // A VERY basic jobId algorithm
  var jobId = 0L
 
  def receive = {
    case GetItemStatistics(clients) =>
      jobId += 1
      jobs(jobId) = Job(jobId, sender(), clients)
      log info s"Statistics for job [$jobId]"
 
      // start querying
      clients foreach (client => worker ! GetItems(client, jobId))
 
    // Get results from a job
    case Items(client, items, jobId) =>
      val lastJobState = jobs(jobId)
      val newJobState = lastJobState.copy(
        results = lastJobState.results :+ (client, items)
      )
 
      if (newJobState.isFinished()) {
        // send results and remove job
        newJobState.source ! ItemStatistics(newJobState.results)
        jobs remove jobId
      } else {
        // update job state
        jobs(jobId) = newJobState
      }
  }
 
}

The logic of the aggregator is based on a few steps

  1. Receive a GetItemStatistics(clients) message.
  2. Start a new job by incrementing the jobId counter and store the job state in a mutable map
  3. Receive the Items results as message.
  4. If all clients have answered, send the results; otherwise just aggregate the results

Based on this scheme you can easily add more error handling as needed. For per-job error handling, one can create a JobActor for each job which calls context.setReceiveTimeout, so the actor receives a ReceiveTimeout message when it has been idle for too long.
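A minimal sketch of such a per-job timeout; the JobActor is hypothetical and only the timeout handling is shown:

import akka.actor.{ Actor, ActorLogging, ReceiveTimeout }
import scala.concurrent.duration._

class JobActor(jobId: Long) extends Actor with ActorLogging {
  // if no message arrives within 5 seconds we receive a ReceiveTimeout
  context.setReceiveTimeout(5.seconds)

  def receive = {
    case ReceiveTimeout =>
      log warning s"Job [$jobId] timed out"
      context stop self
    // ... collect the Items results for this job here, as before
  }
}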

Summary

First of all, use the implementation that suits your needs! If you don’t need full-blown error handling or fine-grained dispatching logic, go for the easy ones first. Scala futures are really simple and powerful. If you like more syntactic sugar, take a look at SIP-22 – Async, which helps you write less code while using futures.
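For example, the sequence-and-flatten steps from above read like sequential code with scala-async; this sketch assumes the extra scala-async module on the classpath:

import scala.async.Async.{ async, await }

val itemsFuture: Future[Seq[Item]] = async {
  await(Future.sequence(itemFutures)).flatten
}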

Starting easy also works if your application grows more than you initially expected. Using Scala futures makes it easy to switch to actors, as you can extract the logic into actors and use the ask pattern to get the same future. Then you can refactor inside the actor step by step as needed.

The code can be found on GitHub.

How to add a maven-plugin jar as dependency to sbt

I want to use the jdeb library in one of my own libraries. Since it is a Maven plugin, it’s packaged with the maven-plugin packaging type, and SBT does not resolve the jar if you just add it as a dependency. This will do the trick:

"org.vafer" % "jdeb" % "1.2" artifacts (Artifact("jdeb", "jar", "jar"))

This one is for sbt 0.13.5!

Playframework and RequireJS


As a backend developer I like tools that help me structure my code. Doing more and more frontend stuff, I finally got time to learn some of the basics of RequireJS. Unfortunately, the pool of tutorials on how to combine the Play Framework and RequireJS in multi-page applications is not too big. There are some with AngularJS, but I didn’t want to port my application to two new systems.

Application structure

For a sample application, I implemented two pages: an index page and a dashboard page.

Both will have their own data-entry-point and dependencies. The index page looks like this

@(message: String)
 
@main("RequireJS with Play") {
    // html here
 
 @helper.requireJs(core = routes.Assets.at("javascripts/require.js").url,
                   module = routes.Assets.at("javascripts/main/main").url)
 
}

The routes file is very basic, too:

GET    /                controllers.Application.index
GET    /dashboard       controllers.Application.dashboard
POST   /api/sample      controllers.Application.sample
 
### Additions needed
GET    /jsroutes.js     controllers.Application.jsRoutes()
### Enable www.WebJars.org based resources to be returned
GET    /webjars/*file   controllers.WebJarAssets.at(file)
GET    /assets/*file    controllers.Assets.at(path="/public", file)

The javascript folder layout

  • assets/javascripts
    • common.js
    • main.js
    • dashboard
      • chart.js
      • main.js
    • lib
      • math.js

How does it work?

First you define a file common.js, which is used to configure requirejs.

(function(requirejs) {
    "use strict";
 
    requirejs.config({
        baseUrl : "/assets/javascripts",
        shim : {
            "jquery" : {
                exports : "$"
            },
            "jsRoutes" : {
                exports : "jsRoutes"
            }
        },
        paths : {
            "math" : "lib/math",
            // Map the dependencies to CDNs or WebJars directly
            "_" : "//cdnjs.cloudflare.com/ajax/libs/underscore.js/1.5.1/underscore-min",
            "jquery" : "//localhost:9000/webjars/jquery/2.0.3/jquery.min",
            "bootstrap" : "//netdna.bootstrapcdn.com/bootstrap/3.0.0/js/bootstrap.min",
            "jsRoutes" : "//localhost:9000/jsroutes"
        // A WebJars URL would look like
        // //server:port/webjars/angularjs/1.0.7/angular.min
        }
    });
 
    requirejs.onError = function(err) {
        console.log(err);
    };
})(requirejs);

The baseUrl is important, as this will be the root path from now on. IMHO this makes things easier than relative paths.

The shim configuration is used to export your jsRoutes, which is defined in my Application.scala file. Of course you can add as many exports as you want.

The paths section is a bit tricky. Currently it seems there’s no better way than hardcoding the URLs, like “jsRoutes” : “//localhost:9000/jsroutes”, when you use WebJars.

Define and Require

Ordering is crucial! For my /dashboard page, /dashboard/main.js is the entry point.

// first load the configuration
require(["../common"], function(common) {
   console.log('Dashboard started');
 
   // Then load submodules. Remember the baseUrl is set:
   // Even though you are in the dashboard folder you have to reference dashboard/chart
   // directly
   require(["jquery", "math", "dashboard/chart"], function($, math, chart){
       console.log("Title is : " + $('h1').text());
       console.log("1 + 3 = " + math.sum(1,3));
       console.log(chart);
 
       chart.load({ page : 'dashboard'}, function(data){
           console.log(data);
       }, function(status, xhr, error) {
           console.log(status);
       });
 
   });
});

For chart.js:

// first the configuration, then other dependencies
define([ "../common", "jquery", "jsRoutes" ], function(common, $, jsRoutes) {
    return {
        load : function(data, onSuccess, onFail) {
            var r = jsRoutes.controllers.Application.sample();
            r.contentType = 'application/json';
            r.data = JSON.stringify(data);
            $.ajax(r).done(onSuccess).fail(onFail);
        }
    };
});


Gradient Descent with Scala

Currently I’m watching a Scala and a Machine Learning course on coursera.org and
wanted to try some simple stuff for myself. I thought gradient descent would be a
perfect start to try some functional programming.

The code

import scala.math._

object GradientDecent extends App {

  val alpha = 0.1 // size of steps taken in gradient descent
  val iterations = 1000 // arbitrary number of update steps
  val samples = List((Vector(0.0, 0.0), 2.0), (Vector(3.0, 1.0), 12.0), (Vector(2.0, 2.0), 18.0))

  var tetas = Vector(0.0, 0.0, 0.0)
  for (i <- 0 until iterations) {
    tetas = tetas.zipWithIndex map {
      // teta_0 uses the constant 1 as its feature
      case (teta, 0) =>
        teta - (alpha / samples.size) * samples.foldLeft(0.0) {
          case (sum, (x, y)) => decentTerm(sum, 1, x, y, tetas)
        }
      // teta_i (i > 0) uses feature x(i - 1)
      case (teta, i) =>
        teta - (alpha / samples.size) * samples.foldLeft(0.0) {
          case (sum, (x, y)) => decentTerm(sum, x(i - 1), x, y, tetas)
        }
    }
  }

  def decentTerm(sum: Double, x_j: Double, x: Vector[Double], y: Double, tetas: Vector[Double]) = {
    sum + x_j * (h(x, tetas) - y)
  }

  def h(x: Vector[Double], teta: Vector[Double]): Double = {
    teta(0) + (for (i <- 1 until teta.size) yield teta(i) * x(i - 1)).foldLeft(0.0)((sum, x) => sum + x)
  }

}

And that’s pretty much everything. This is just a first version and I’m sure somebody will find ways
to optimize it. However, even this hacked version is very short and handsome :)

Update:
The code snippet here is a gradient descent for performing linear regression.
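In formulas: with m samples and the convention x_0 = 1 for the bias term, the hypothesis and the update rule implemented above are

\[ h_\theta(x) = \theta_0 + \sum_{i=1}^{n} \theta_i x_i \]

\[ \theta_j \leftarrow \theta_j - \frac{\alpha}{m} \sum_{k=1}^{m} \left( h_\theta(x^{(k)}) - y^{(k)} \right) x_j^{(k)} \]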

DevVM Part 1 – Gerrit on Ubuntu 12.04 Server

I’m currently working on a little development VM and want to share some of the insights I gained and how I managed to get things to work. The series starts with this tutorial on installing Gerrit.

What is Gerrit?

Gerrit provides a powerful server to integrate a code review process into your git-driven development workflow. These are the main reasons I picked Gerrit:

  • Supports git as the versioning system – awesome
  • Integration with build servers like Jenkins, so tests run automatically and the CI server becomes part of the code review process
  • Great Eclipse integration with EGit

Install Gerrit

All you need is root shell access to your server and a working internet connection (surprise!)

Generate gerrit2 user

First we create a group gerrit2 and a user gerrit2 with a home directory located at /usr/local/gerrit2:

sudo addgroup gerrit2
sudo adduser --system --home /usr/local/gerrit2 --shell /bin/bash --ingroup gerrit2 gerrit2

I use my own MySQL database instead of the integrated H2 database. You have to create a user gerrit2 and a database called reviewdb, too. On the shell you can do this via

mysql --user=root -p
CREATE USER 'gerrit2'@'localhost' IDENTIFIED BY 'secret';
CREATE DATABASE reviewdb;
ALTER DATABASE reviewdb charset=latin1;
GRANT ALL ON reviewdb.* TO 'gerrit2'@'localhost';
FLUSH PRIVILEGES;
exit;

The last thing to do as root is to create a default config file for Gerrit:

sudo touch /etc/default/gerritcodereview

and insert, with an editor of your choice,

GERRIT_SITE=/usr/local/gerrit2

Now we log in as the gerrit2 user and install Gerrit.

sudo su gerrit2
cd ~
wget http://gerrit.googlecode.com/files/gerrit-2.4.2.war
java -jar gerrit-2.4.2.war init -d /usr/local/gerrit2

The download address may have changed, so check that.

Fill out everything for your needs. The database password is your secret. Check that everything works by starting Gerrit with

cd ~/bin
./gerrit.sh start
./gerrit.sh stop

When everything works fine, you can update your init.d so Gerrit starts automatically on startup. You do this with the following commands.

sudo ln -snf /usr/local/gerrit2/bin/gerrit.sh /etc/init.d/gerrit
sudo update-rc.d gerrit defaults

Now your Gerrit server starts each time your machine starts.

Troubleshooting

I made some errors during the installation which almost drove me crazy.

Authentication via OpenID – Register new Email

It’s great that you can access the Gerrit server with OpenID. However, if the email on your OpenID account (like *@gmail) differs from the one on your ssh key (like *@your-company.com), then you must register a new email on your account. That only works if your SMTP server is configured correctly.

By default Gerrit uses “user@hostname” as the sender. For me that was “gerrit@server”, which isn’t a valid email address. You can configure your user in the user section of the Gerrit config.

[user]
      name = Your name
      email = name@your-company.com

Maven – Tycho, Java, Scala and APT

This tutorial shows a small project which is built with Maven Tycho and has the following requirements:

  • Mixed Java / Scala project
  • Eclipse plugin deployment
  • Eclipse Annotation Processing (APT)
  • Manifest-first approach
  • Java 7 / Scala 2.9.2

That doesn’t sound too hard. In fact it isn’t, if you are familiar with Maven and how Tycho works.

Setting up maven

First download Maven 3 and configure it.
I created two profiles in my settings.xml and added some repositories.
My two profiles are tycho-build and scala-build, which are activated when
the corresponding property is present:
<settings>
 <profiles>
  <profile>
   <id>tycho</id>
   <activation>
    <activeByDefault>false</activeByDefault>
    <property>
     <name>tycho-build</name>
    </property>
  </activation>
  <repositories>
   <repository>
    <id>eclipse-indigo</id>
    <layout>p2</layout>
    <url>http://download.eclipse.org/releases/indigo</url>
   </repository>
   <repository>
    <id>eclipse-sapphire</id>
    <layout>p2</layout>
    <url>http://download.eclipse.org/sapphire/0.4.1/repository</url>
   </repository>
   <repository>
    <id>eclipse-scala-ide</id>
    <layout>p2</layout>
   <url>http://download.scala-ide.org/releases-29/milestone/site</url>
  </repository>
  <repository>
   <id>eclipse-gemini-dbaccess</id>
   <layout>p2</layout>
   <url>http://download.eclipse.org/gemini/dbaccess/updates/1.0</url>
   </repository>
  </repositories>
 </profile>
 
 <profile>
  <id>scala</id>
  <activation>
   <activeByDefault>false</activeByDefault>
    <property>
     <name>scala-build</name>
    </property>
   </activation>
  <repositories>
   <repository>
    <id>scala-tools.org</id>
    <name>Scala-tools Maven2 Repository</name>
    <url>http://scala-tools.org/repo-releases</url>
   </repository>
   <repository>
    <id>typesafe</id>
    <name>Typesafe Repository</name>
    <url>http://repo.typesafe.com/typesafe/releases/</url>
   </repository>
  </repositories>
 <pluginRepositories>
  <pluginRepository>
    <id>scala-tools.org</id>
    <name>Scala-tools Maven2 Repository</name>
    <url>http://scala-tools.org/repo-releases</url>
   </pluginRepository>
  </pluginRepositories>
 </profile>
</profiles>
</settings>

Setting up the project – The tycho build

For my project I just used two simple plugins. Nothing fancy here.

  1. Create a plugin project
  2. Add some dependencies
  3. Write some classes in Java

I recommend the following project structure:

root-project/
 plugin.core
 plugin.ui
 plugin.xy

Go to your root-project folder in your favorite console and use the following command to generate the pom.xml files with Tycho:

mvn org.sonatype.tycho:maven-tycho-plugin:generate-poms -DgroupId=de.mukis -Dtycho.targetPlatform=path/to/target/platform/

This generates a first project for you. A few things to “tweak”, which I saw as best practice in most of the other tutorials:

  • Replace all concrete version numbers with property placeholders, e.g. 0.12.0 with ${tycho.version}
  • Remove all groupId and version tags in the pom.xml. The parent pom.xml will provide these.
  • Check your folder structure. Tycho infers AND changes your source directory according to your build.properties.

Next add the p2 repositories needed to resolve all dependencies. This is done via the <repository> tag. The full pom.xml is at the end.

Sometimes you have existing OSGi bundles but no p2 repository to consume them from. Eclipse PDE has a nice extra feature for you: the Features and Bundles Publisher application. Note: it’s very important that your repository folder has the two folders plugins and features.

Now you can run your Maven build with

mvn clean package

and you will get a nicely packaged OSGi bundle.

Setting up the project – The scala build

So now we want to add some Scala classes. Create a new source folder src/main/scala and create some classes. Don’t forget to import the Scala packages, so that your MANIFEST.MF contains something like:
Import-Package: org.osgi.framework;version="1.6.0",
 scala;version="[2.9.0.1,2.9.3.0]",
 scala.collection;version="[2.9.0.1,2.9.3.0]",
 scala.collection.generic;version="[2.9.0.1,2.9.3.0]",
 scala.collection.immutable;version="[2.9.0.1,2.9.3.0]",
 scala.collection.interfaces;version="[2.9.0.1,2.9.3.0]",
 scala.collection.mutable;version="[2.9.0.1,2.9.3.0]",
 scala.collection.parallel;version="[2.9.0.1,2.9.3.0]",
 scala.collection.parallel.immutable;version="[2.9.0.1,2.9.3.0]",
 scala.collection.parallel.mutable;version="[2.9.0.1,2.9.3.0]",
 scala.concurrent;version="[2.9.0.1,2.9.3.0]",
 scala.concurrent.forkjoin;version="[2.9.0.1,2.9.3.0]",
 scala.io;version="[2.9.0.1,2.9.3.0]",
 scala.math;version="[2.9.0.1,2.9.3.0]",
 scala.parallel;version="[2.9.0.1,2.9.3.0]",
 scala.ref;version="[2.9.0.1,2.9.3.0]",
 scala.reflect,
 scala.reflect.generic;version="[2.9.0.1,2.9.3.0]",
 scala.runtime;version="[2.9.0.1,2.9.3.0]",
 scala.text;version="[2.9.0.1,2.9.3.0]",
 scala.util;version="[2.9.0.1,2.9.3.0]",
Now there are two alternatives to build. I chose to add the source folder in my build.properties and exclude the .scala files in my Maven pom. The alternative is described here.
We need the maven-scala-plugin. Add the repository
...
 <repository>
  <id>scala-tools.org</id>
  <name>Scala-tools Maven2 Repository</name>
  <url>http://scala-tools.org/repo-releases</url>
 </repository>
...
 <pluginRepository>
  <id>scala-tools.org</id>
  <name>Scala-tools Maven2 Repository</name>
  <url>http://scala-tools.org/repo-releases</url>
 </pluginRepository>
and to our root pom.xml we add the maven-scala-plugin
<plugin>
 <groupId>org.scala-tools</groupId>
 <artifactId>maven-scala-plugin</artifactId>
 <version>2.15.0</version>
 <executions>
  <execution>
   <id>compile</id>
   <goals>
    <goal>compile</goal>
   </goals>
   <phase>compile</phase>
  </execution>
 
  <execution>
   <id>test-compile</id>
   <goals>
    <goal>testCompile</goal>
   </goals>
   <phase>test-compile</phase>
  </execution>
 
  <execution>
   <phase>process-resources</phase>
   <goals>
    <goal>compile</goal>
   </goals>
  </execution>
 </executions>
</plugin>
There is actually an easier version, but it doesn’t work with circular dependencies.
If you have added the src/main/scala folder in your build.properties, then you have to add another plugin to prevent Tycho from exporting all Scala source files.
<plugin>
 <groupId>org.eclipse.tycho</groupId>
 <artifactId>tycho-compiler-plugin</artifactId>
 <version>${tycho.version}</version>
 <configuration>
  <excludeResources>
   <excludeResource>**/*.scala</excludeResource>
  </excludeResources>
 </configuration>
</plugin>
Now the build should work with Scala, too.

Setting up the project – APT code generation with Eclipse Sapphire

I’m creating some models with Eclipse Sapphire, which uses Java Annotation Processing (APT) to generate the models. The apt-maven-plugin is a Maven plugin that allows us to trigger a processing factory during the build. The current version alpha-04 has a bug which leads to an error with Java 7. So before you can use this plugin you have to check out the source code and build the latest alpha-05 version, as it’s not released at the moment. Install it in your local Maven repository.
Now you can add the apt-maven-plugin to the plugin which needs APT. This could look like:
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
 
<parent>
 <groupId>de.lmu.ifi.dbs.knowing</groupId>
 <artifactId>Knowing</artifactId>
 <version>0.1.4-SNAPSHOT</version>
</parent>
 
<artifactId>de.lmu.ifi.dbs.knowing.core</artifactId>
<packaging>eclipse-plugin</packaging>
 
<build>
 <plugins>
  <plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>apt-maven-plugin</artifactId>
   <version>1.0-alpha-5-SNAPSHOT</version>
   <executions>
    <execution>
     <goals>
      <goal>process</goal>
     </goals>
    </execution>
   </executions>
   <configuration>
  <factory>org.eclipse.sapphire.sdk.build.processor.internal.APFactory</factory>
   </configuration>
  </plugin>
 </plugins>
</build>
</project>
At last you have to add the factory as optional dependencies to the MANIFEST.MF of the plugin using APT.
org.eclipse.sapphire.sdk;bundle-version="[0.4.0,0.5.0)";resolution:=optional,
org.eclipse.sapphire.sdk.build.processor;bundle-version="[0.4.0,0.5.0)";resolution:=optional
If you trigger the build, you will see that your APT sources are generated in target/generated-sources/apt. However, the files are not compiled. At first I tried the maven-build-helper, but Tycho seems to override these settings. So I added target/generated-sources/apt to the build.properties of the plugin using APT, which seems like a bad workaround to me. However, it works fine.

Source Code

You can find the code in my github repository.

Conclusion

For a beginner it was not that easy to avoid all the little traps with Tycho, Scala, Maven and APT. But in the end I hope to save a lot of time when building and testing.

Things to add

The tutorial doesn’t include any testing.

Links

https://github.com/muuki88/tycho
http://wiki.eclipse.org/Tycho/Reference_Card
http://mattiasholmqvist.se/2010/02/building-with-tycho-part-1-osgi-bundles/
https://github.com/misto/Scala-Hello-World-Plug-in
Compiling circular dependent java-scala classes
Eclipse sapphire and tycho
compile generated sources
http://mojo.codehaus.org/apt-maven-plugin/
APT M2E Connector
Publish pre-compiled bundles in p2 repository

Akka and OSGi development in Eclipse

This short tutorial is about how to run Akka in an OSGi environment. I faced
a lot of problems deploying this in plain Eclipse without Maven, bnd or sbt.

This example is done with the Java API; however, it is also possible with Scala.

Requirements

  • Eclipse Helios 3.6.2 with Scala-Plugin
  • akka-1.1-modules distribution

Configuration

First we have to make some minor changes to some manifest files in the Akka distribution.

  1. Extract akka-modules-1.1.zip, e.g. to ~/akka
  2. Go to akka/lib_managed/compile
  3. Open akka-actor-1.1.jar -> META-INF/MANIFEST.MF
  4. Delete the following line: Private-Package: *
  5. Do the same with akka-typed-actor-1.1.jar

Second, you have to set up a target platform which is used to run the OSGi environment.

  1. Go to Window -> Preferences -> Plugin Development -> Target Platform
  2. Add a target platform, use the default
  3. Extract your akka-modules-1.1.zip, e.g. to ~/akka

You need the following plugins:

  1. guice-all-2.0.jar
  2. logback-classic-0.9.24.jar
  3. logback-core-0.9.24.jar
  4. slf4j-api-1.6.0.jar
  5. Aspectwerkz by Jonas Bonér

The bundle

Create a new plugin project with no contributions to the UI and with an activator class.

Copy the following libs into your bundle and add them to your classpath in MANIFEST.MF

  • akka-actor-1.1.jar
  • akka-typed-actor-1.1.jar
  • akka-slf4j-1.1.jar

Create a class MyActor

import akka.actor.UntypedActor;
 
public class MyActor extends UntypedActor {
 
	@Override
	public void onReceive(Object msg) throws Exception {
		System.out.println("Message: " + msg);
	}
 
}
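For comparison, the same actor against the Akka 1.1 Scala API would look roughly like this (a sketch; the rest of the tutorial sticks to the Java API):

import akka.actor.Actor

class MyActor extends Actor {
  def receive = {
    case msg => println("Message: " + msg)
  }
}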

Add these lines to your Activator class.

import akka.actor.ActorRef;
import akka.actor.Actors;
 
//...
 
	public void start(BundleContext bundleContext) throws Exception {
		Activator.context = bundleContext;
		ActorRef actor = Actors.actorOf(MyActor.class).start();
		actor.sendOneWay("Hello You");
	}

At last you have to edit the MANIFEST.MF. It should look something like this. (I know
I could probably reduce this to the smallest set of imported Scala packages.)

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Core
Bundle-SymbolicName: de.lmu.ifi.dbs.knowing.core;singleton:=true
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: de.lmu.ifi.dbs.knowing.core.internal.Activator
Require-Bundle: org.eclipse.core.runtime,
 se.scalablesolutions.akka.actor;bundle-version="1.0.0",
 se.scalablesolutions.akka.stm;bundle-version="1.0.0",
 se.scalablesolutions.akka.typed.actor;bundle-version="1.0.0"
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Bundle-ClassPath: .
Import-Package: scala;version="2.9.0.1",
 scala.collection;version="2.9.0.1",
 scala.collection.generic;version="2.9.0.1",
 scala.collection.immutable;version="2.8.1.final",
 scala.collection.interfaces;version="2.8.1.final",
 scala.collection.mutable;version="2.8.1.final",
 scala.compat;version="2.8.1.final",
 scala.concurrent;version="2.8.1.final",
 scala.concurrent.forkjoin;version="2.8.1.final",
 scala.io;version="2.8.1.final",
 scala.math;version="2.8.1.final",
 scala.mobile;version="2.8.1.final",
 scala.ref;version="2.8.1.final",
 scala.reflect;version="2.8.1.final",
 scala.reflect.generic;version="2.8.1.final",
 scala.runtime;version="2.8.1.final",
 scala.text;version="2.8.1.final",
 scala.util;version="2.8.1.final",
 scala.util.automata;version="2.8.1.final",
 scala.util.continuations;version="2.8.1.final",
 scala.util.control;version="2.8.1.final",
 scala.util.grammar;version="2.8.1.final",
 scala.util.matching;version="2.8.1.final",
 scala.util.parsing.ast;version="2.8.1.final",
 scala.util.parsing.combinator;version="2.8.1.final",
 scala.util.parsing.combinator.lexical;version="2.8.1.final",
 scala.util.parsing.combinator.syntactical;version="2.8.1.final",
 scala.util.parsing.combinator.testing;version="2.8.1.final",
 scala.util.parsing.combinator.token;version="2.8.1.final",
 scala.util.parsing.input;version="2.8.1.final",
 scala.util.parsing.json;version="2.8.1.final",
 scala.util.parsing.syntax;version="2.8.1.final",
 scala.util.regexp;version="2.8.1.final"

Now let’s run this!

Launch configuration

  1. Open Run->Launch configurations.
  2. Create a new OSGi Launch configuration
  3. Add the following bundles
    1. org.scala-ide.scala.library (2.8.1) (the akka scala library didn’t work for me)
    2. se.scalablesolutions.akka.actor
    3. se.scalablesolutions.akka.osgi.dependencies.bundle
    4. se.scalablesolutions.akka.actor.typed.actor
    5. se.scalablesolutions.akka.actor.stm
    6. com.google.inject
    7. Equinox Runtime Components (e.g. eclipse.runtime.core, ...)
  4. Try to launch

Hope this works for you, too!

Eclipse Gemini JPA Tutorial

After my test I will start writing a tutorial with a sample application for the Eclipse Gemini Project.

Currently you can check out the SVN repository under:
https://svn.cip.ifi.lmu.de/~seilern/svn/org.eclipse.gemini.jpa

Good luck,
Muki

UI Extension via Extension Points in Eclipse RCP

Eclipse has a powerful mechanism to allow plugins to contribute to the UI: Extensions and Extension Points. There are a lot of excellent tutorials like Eclipse Extensions by Lars Vogel on the internet. However, this little tutorial is about how to contribute to an Editor (in this case an additional TabItem).

1. The Extension Interface

First we have to create an Interface which the Extension has to implement. To create an additional Tab in an Editor I created an Interface like this:

public interface IEditorTabExtension {
 
	/**
	 * Is called to create the tab control
	 * @param parent
	 * @return Control - The created Control
	 */
	public Control createContents(Composite parent);
 
	/**
	 * Should be called by the doSave method in
	 * the root EditorPart
	 *
	 * @param monitor
	 */
	public void doSave(IProgressMonitor monitor);
 
	/**
	 * Call-by-Reference dirty boolean. Indicates
	 * if changes were made.
	 *
	 * @param dirty
	 */
	public void setDirty(Boolean dirty);
 
	/**
	 *
	 * @return Name for the Tab
	 */
	public String getName();
}

2. Create the Extension Point

First we create an Extension Point in the plugin.xml via the plugin.xml Editor.

Create Extension Point

The Extension Schema Editor should now open automatically; otherwise there’s a button.
Add a new element and call it “tab”. Now add a new attribute and name it “class”. The type should be “java” and it implements
our IEditorTabExtension. Don’t forget to create a new Choice in the “extension” element, and in there a
“tab” entry. Now it should look like this:

Extension Point Elements

3. Create an Extension and provide it

Our plugin can not only provide an extension point, it provides an extension too. Feel free to
implement the interface with a UI you like. To register this extension open the plugin.xml
and the Extensions tab. Add our new extension point de.mukis.editor.EditorTabExtension.
It should look like this:

Provide Extension

4. Evaluate the contributions and add them to the Editor

private IEditorTabExtension[] extensions;
 
	@Override
	public void doSave(IProgressMonitor monitor) {
		dirty = false;
		for(IEditorTabExtension e : extensions)
			e.doSave(monitor);
		firePropertyChange(IWorkbenchPartConstants.PROP_DIRTY);
	}
 
	@Override
	public void createPartControl(Composite parent) {
		folder = new TabFolder(parent, SWT.BORDER);
 
		extensions = evaluateTabContribs();
		for (IEditorTabExtension e : extensions) {
			TabItem tab = new TabItem(folder, SWT.BORDER);
			tab.setText(e.getName());
			tab.setControl(e.createContents(folder));
			System.out.println("Tab added");
		}
 
	}
 
 private IEditorTabExtension[] evaluateTabContribs() {
		IConfigurationElement[] config = Platform.getExtensionRegistry()
				.getConfigurationElementsFor(TAB_ID);
		final LinkedList<IEditorTabExtension> list = new LinkedList<IEditorTabExtension>();
		try {
			for(IConfigurationElement e : config) {
				System.out.println("Evaluation extension");
				final Object o = e.createExecutableExtension("class");
				if(o instanceof IEditorTabExtension) {
					ISafeRunnable runnable = new ISafeRunnable() {
 
						@Override
						public void handleException(Throwable exception) {
							System.out.println("Exception in Tab");
						}
 
						@Override
						public void run() throws Exception {
							IEditorTabExtension tab = (IEditorTabExtension)o;
							list.add(tab);
							System.out.println("Extension detected: " + tab.getName());
						}
					};
					SafeRunner.run(runnable);
				}
			}
		} catch(CoreException ex) {
			System.out.println(ex.getMessage());
		}
		return list.toArray(new IEditorTabExtension[list.size()]);
	}

This is very basic, and the isDirty flag solution isn’t very smart: we use the call-by-reference effect
to provide a “global” Boolean.

Thanks to Lars Vogel’s tutorials, which inspired me to do my own stuff and have been used
for this tutorial.