Playframework and RequireJS

As a backend developer I like tools that help me structure my code. Doing more and more frontend work, I finally got time to learn some of the basics of RequireJS. Unfortunately, the number of tutorials on how to combine Playframework and RequireJS in a multipage application is not too big. There are some for AngularJS, but I didn’t want to port my applications to two new systems at once.

Application structure

For a sample application, I implemented two pages: an index page and a dashboard page. Both have their own data-entry-point and dependencies. The index page looks like this:

@(message: String)
 
@main("RequireJS with Play") {
    @* html here *@
 
 @helper.requireJs(core = routes.Assets.at("javascripts/require.js").url,
                   module = routes.Assets.at("javascripts/main/main").url)
 
}

The routes file is very basic, too:

GET    /                controllers.Application.index
GET    /dashboard       controllers.Application.dashboard
POST   /api/sample      controllers.Application.sample
 
### Additions needed
GET    /jsroutes.js     controllers.Application.jsRoutes()
### Enable www.WebJars.org based resources to be returned
GET    /webjars/*file   controllers.WebJarAssets.at(file)
GET    /assets/*file    controllers.Assets.at(path="/public", file)

The javascript folder layout

  • assets/javascripts
    • common.js
    • main.js
    • dashboard
      • chart.js
      • main.js
    • lib
      • math.js

How does it work?

First you define a file common.js, which is used to configure RequireJS.

(function(requirejs) {
    "use strict";
 
    requirejs.config({
        baseUrl : "/assets/javascripts",
        shim : {
            "jquery" : {
                exports : "$"
            },
            "jsRoutes" : {
                exports : "jsRoutes"
            }
        },
        paths : {
            "math" : "lib/math",
            // Map the dependencies to CDNs or WebJars directly
            "_" : "//cdnjs.cloudflare.com/ajax/libs/underscore.js/1.5.1/underscore-min",
            "jquery" : "//localhost:9000/webjars/jquery/2.0.3/jquery.min",
            "bootstrap" : "//netdna.bootstrapcdn.com/bootstrap/3.0.0/js/bootstrap.min",
            "jsRoutes" : "//localhost:9000/jsroutes"
        // A WebJars URL would look like
        // //server:port/webjars/angularjs/1.0.7/angular.min
        }
    });
 
    requirejs.onError = function(err) {
        console.log(err);
    };
})(requirejs);

The baseUrl is important, as this will be the root path from now on. IMHO this makes things easier than relative paths.

The shim configuration is used to export your jsRoutes, which is defined in my Application.scala file. Of course you can add as many as you want.
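
For reference, a minimal sketch of such a jsRoutes action (the route and action names match the routes file above; the exact shape of your Application.scala may differ):

// in Application.scala: expose selected reverse routes to JavaScript
def jsRoutes = Action { implicit request =>
  Ok(play.api.Routes.javascriptRouter("jsRoutes")(
    routes.javascript.Application.sample
  )).as("text/javascript")
}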

The paths section is a bit tricky. Currently it seems there’s no better way than hardcoding the urls, like “jsRoutes” : “//localhost:9000/jsroutes”, when you use WebJars.

Define and Require

Ordering is crucial! For my /dashboard page, /dashboard/main.js is the entry point:

// first load the configuration
require(["../common"], function(common) {
   console.log('Dashboard started');
 
   // Then load submodules. Remember the baseUrl is set:
   // Even though you are in the dashboard folder you have to reference
   // dashboard/chart directly
   require(["jquery", "math", "dashboard/chart"], function($, math, chart){
       console.log("Title is : " + $('h1').text());
       console.log("1 + 3 = " + math.sum(1,3));
       console.log(chart);
 
       chart.load({ page : 'dashboard'}, function(data){
           console.log(data);
       }, function(status, xhr, error) {
           console.log(status);
       });
 
   });
});

The chart.js module looks like this:

// first the configuration, then the actual dependencies
define([ "../common", "jquery", "jsRoutes" ], function(common, $, jsRoutes) {
    return {
        load : function(data, onSuccess, onFail) {
            var r = jsRoutes.controllers.Application.sample();
            r.contentType = 'application/json';
            r.data = JSON.stringify(data);
            $.ajax(r).done(onSuccess).fail(onFail);
        }
    };
});


Future Composition with Scala and Akka

Scala is a functional and object-oriented language which runs on the JVM. For concurrent and/or parallel programming it is a suitable choice along with the Akka framework, which provides a rich toolset for all kinds of concurrent tasks. In this post I want to show a little example of how to schedule a logfile-search job on multiple files/servers with Futures and Actors.

Setup

I created my setup with the Typesafe Activator Hello-Akka template. This results in a build.sbt file with the following content:

name := """hello-akka"""
 
version := "1.0"
 
scalaVersion := "2.10.2"
 
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.2.0",
  "com.typesafe.akka" %% "akka-testkit" % "2.2.0",
  "com.google.guava" % "guava" % "14.0.1",
  "org.scalatest" % "scalatest_2.10" % "1.9.1" % "test",
  "junit" % "junit" % "4.11" % "test",
  "com.novocode" % "junit-interface" % "0.7" % "test->default"
)
 
testOptions += Tests.Argument(TestFrameworks.JUnit, "-v")

Scala built-in Futures

Scala already has built-in support for Futures. The implementation is based on java.util.concurrent. Let’s implement a Future which runs our log search.

import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits._
 
object LogSearch extends App {

  println("Starting log search")

  val searchFuture = future {
    Thread sleep 1000
    "Found something"
  }

  println("Blocking for results")
  val result = Await result (searchFuture, 5 seconds)
  println(s"Found $result")
}

This is all we need to run our task in another thread. The implicit import from ExecutionContext provides a default ExecutionContext which handles the threads the future runs on. After creating the future, we wait for the result with a blocking Await result call. So far nothing too fancy.
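
If you don’t want the global default, you can supply your own ExecutionContext instead of the Implicits import; a minimal sketch (the fixed pool size of 4 is an arbitrary choice for illustration):

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// wrap a plain Java thread pool as an ExecutionContext for futures
implicit val customContext = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))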

Future composition

There are a lot of examples where the for-yield syntax is used to compose future results. In our case we have a dynamic list of futures: the log search results from each server.

For testing future capabilities, we will create a list of futures from a list of Ints which represent the time in milliseconds each task will run. The types are just for clarification.

val tasks = List(3000, 1200, 1800, 600, 250, 1000, 1100, 8000, 550)
val taskFutures: List[Future[String]] = tasks map { ms =>
  future {
    Thread sleep ms
    s"Task with $ms ms"
  }
}

In the end, we want a List[String] as a result. This is done with the Future companion object.

val searchFuture: Future[List[String]] = Future sequence taskFutures

And finally we can wait for our results with

val result = Await result (searchFuture, 2 seconds)

However, this will throw a TimeoutException, as some of our tasks run for more than 2 seconds. Of course we could increase the timeout, but the error could always happen again, e.g. when a server is down. Another approach would be to handle the exception and return an error, but then all other results would be lost.

Future – Timeout fallback

No problem: we generate a fallback which returns a default value if the operation takes too long. A very naive implementation of our fallback could look like this:

def fallback[A](default: A, timeout: Duration): Future[A] = future {
  Thread sleep timeout.toMillis
  default
}

The fallback future will return after the executing thread has slept for the timeout duration. The calling code now looks like this:

val timeout = 2 seconds
val tasks = List(3000, 1200, 1800, 600, 250, 1000, 1100, 8000, 550)
val taskFutures: List[Future[String]] = tasks map { ms =>
  val search = future {
    Thread sleep ms
    s"Task with $ms ms"
  }

  Future firstCompletedOf Seq(search,
    fallback(s"timeout $ms", timeout))
}

val searchFuture: Future[List[String]] = Future sequence taskFutures

println("Blocking for results")
val result = Await result (searchFuture, timeout * tasks.length)
println(s"Found $result")

The important call here is Future firstCompletedOf Seq(..), which produces a future that returns the result of whichever future finishes first.

This implementation is very bad, as discussed here. In short: we are wasting CPU time by putting threads to sleep. Also, the timeout for the blocking call is more or less a guess; with a one-thread scheduler it can actually take more time.

Futures and Akka

Now let’s do this more performant and more robust. Our main goal is to get rid of the poor fallback implementation, which was blocking a complete thread. The idea is now to schedule the fallback feature after a given duration. By this you have all threads working on real, while the fallback future execution time is almost zero. Java has a ScheduledExecutorService on it’s own or you can use a different implementation, a HashedWheelTimer, by Netty. Akka used to use the HashWheelTimer, but has now a own implementation.

So let’s start with the actor.

import akka.actor._
import akka.pattern.{ after, ask, pipe }
import akka.util.Timeout
import scala.concurrent._
import scala.concurrent.duration._

class LogSearchActor extends Actor {

  // execution context for the futures created below
  import context.dispatcher

  def receive = {
    case Search(worktimes, timeout) =>
      // Doing all the work in one actor using futures
      val searchFutures = worktimes map { worktime =>
        val searchFuture = search(worktime)
        val fallback = after(timeout, context.system.scheduler) {
          Future successful s"$worktime ms > $timeout"
        }
        Future firstCompletedOf Seq(searchFuture, fallback)
      }

      // Pipe future results to sender
      (Future sequence searchFutures) pipeTo sender
  }

  def search(worktime: Int): Future[String] = future {
    Thread sleep worktime
    s"found something in $worktime ms"
  }
}

case class Search(worktimes: List[Int], timeout: FiniteDuration)

The important part is the after method call. You give it a duration after which the future should be executed, and as the second parameter the scheduler, which in our case is the default one of the actor system. The third parameter is the future that should be executed. I use the Future.successful companion method to return a single string.

The rest of the code is almost identical. pipeTo is an Akka pattern to return the results of a future to the sender. Nothing fancy here.

Now, how do we call all this? First the code:

import scala.concurrent.TimeoutException
import scala.concurrent.duration._
import scala.util.{ Failure, Success }

object LogSearch extends App {

  println("Starting actor system")
  val system = ActorSystem("futures")

  // execution context for recover and onComplete
  import system.dispatcher

  println("Starting log search")
  try {
    // timeout for each search task
    val fallbackTimeout = 2 seconds

    // timeout used with akka.pattern.ask
    implicit val timeout = new Timeout(5 seconds)

    require(fallbackTimeout < timeout.duration)

    // Create SearchActor
    val search = system.actorOf(Props[LogSearchActor])

    // Test worktimes for search
    val worktimes = List(1000, 1500, 1200, 800, 2000, 600, 3500, 8000, 250)

    // Asking for results
    (search ? Search(worktimes, fallbackTimeout))
      // Cast to correct type
      .mapTo[List[String]]
      // In case something went wrong
      .recover {
        case e: TimeoutException => List("timeout")
        case e: Exception => List(e.getMessage)
      }
      // Callback (non-blocking)
      .onComplete {
        case Success(results) =>
          println(":: Results ::")
          results foreach (r => println(s" $r"))
          system.shutdown()
        case Failure(t) =>
          t.printStackTrace()
          system.shutdown()
      }
  } catch {
    case t: Throwable =>
      t.printStackTrace()
      system.shutdown()
  }

  // Await end of program
  system awaitTermination (20 seconds)
}

The comments should explain most of the parts. This example is completely asynchronous and works with callbacks. Of course you can use the blocking Await result call as before.

Links

https://gist.github.com/muuki88/6099946
http://doc.akka.io/docs/akka/2.1.0/scala/futures.html
http://stackoverflow.com/questions/17672786/scala-future-sequence-and-timeout-handling
http://stackoverflow.com/questions/16304471/scala-futures-built-in-timeout

DRY or DIY

There are always some methods, classes or helpers you don’t find in your programming language which would be useful to save boilerplate code, or which you need a lot in your program. Now you are confronted with the decision whether to implement this particular function yourself (do-it-yourself) or use a third party library (don’t-repeat-yourself). I sometimes have awesome discussions with my boss about which approach we should use. The following is a comparison of both approaches with pros and cons.

DRY | Don’t-Repeat-Yourself

The main question is: why should I reinvent the wheel? There are a lot of good libraries out there, like Google Guava, Apache Commons and many others for more specialized use cases. Before taking a closer look at a library, we consider the following points:

  • When was the last source code update? Is the library still maintained?
  • Is there documentation and are there examples of how to use the library?
  • Is there any kind of community?

This is part of a list described on Java Code Geeks. If a library cannot satisfy at least 2 of the 3 points, we won’t choose it and look for a different one. Then we take a closer look at the library, read tutorials and the documentation, and use the one function we were missing in our current library/language set. Often we do this by writing tests assuring the library does what we want it to do. This is crucial: if you update the library, all the functions you rely on get tested again. Joda Datetime, for example, implements some RFC standards, which is pretty awesome. However, you would have to read the RFCs to know exactly what is going on, or you just test the methods you need.
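
A minimal sketch of such a pinning test, using Guava’s Joiner as a stand-in for the one function we were missing:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

import com.google.common.base.Joiner;

public class JoinerPinningTest {

  // assures the library does what we want it to do,
  // and keeps doing it after an update
  @Test
  public void joinerSkipsNulls() {
    String joined = Joiner.on(", ").skipNulls().join("a", null, "b");
    assertEquals("a, b", joined);
  }
}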

Let’s summarize the benefits of using a third party library:

  • Less development time
  • Pretested functionality
  • Less maintenance effort

DIY | Do-It-Yourself

Sometimes you just need this one little method. However, it ships with a big library full of features you have never even heard of. The library satisfies every standard you set, but it is too big (no matter whether in size, features or maintenance overhead). Even if the library isn’t too big, maybe you just don’t like its API or code style. So you write the method yourself, and it becomes part of your product.

Summary

Task        | DRY                          | DIY                           | Description
Development | Learn the API                | Write your own API            | The more complex the library and the worse the documentation, the harder it is to learn the API compared to writing your own
Tests       | Test the library API you use | Test your own implementation  | You always need them
Maintenance | Community support            | On your own                   | When you use a third party open source library, you should give something back
Happiness   | A bit                        | Even more                     | Developers are lazy, but they don't want to read API docs. Writing code on their own makes them even happier *sigh*

Leistungsschutzrecht – Fantasies

The Leistungsschutzrecht, the German ancillary copyright for press publishers, has passed the Bundestag for now. Even though the current version was watered down in favor of the internet community, the whole endeavor remains more than questionable. So here are possible consequences, the probable as well as the improbable.

Everything off

The interpretation of “short text excerpt” becomes so restrictive that no meaningful display is possible anymore.

Google will try to explain to the publishers the already existing technical means of making their pages non-indexable, so that they no longer appear in search engines. The publishers reject this option, because no money can be made that way. Thereupon a blacklist following the Belgian model is set up for all big search engine providers, which prevents publishers’ pages from being scanned.

Only the lawyers profit from the many court cases between search engine operators and publishers over distortion of competition. What remains in the end is a “this link cannot be displayed in your country”. German readers find their news on foreign portals, and publishers whine that they have to put more money than ever into search engine advertising and optimization.

The German Wikipedia, the second largest, would be down for maintenance for an indefinite time, because all articles would have to be searched for allegedly unlawfully used quotes, links and text excerpts.

Facebook, Twitter, Google+ and co. will, in Germany, either

- become paid services, so that a flat fee is paid to the publishers which grants every user unrestricted sharing, linking and retweeting,
- or extend their terms of service so that users are themselves liable for shared content, and publishers using the platforms consent to their content being shared.

Apps and programs like Thunderbird, Google Currents or simple RSS readers are criminalized and become illegal, because they display content from publishers’ websites. Shortly before browsers as such are ranted against, many publishers realize how ridiculous these campaigns are, in the face of drastically declining online readerships that are not compensated by readers in the print sector.

Useless

The interpretation of “short text excerpt” is generous enough that news aggregators and search engines can keep working as before. Sharing in social networks thus remains unhindered, with the exception of critical voices. Here the Leistungsschutzrecht is used to suppress those voices as far as possible. ACTA sends its regards.

Innovation

The law is rejected and instead, as in France, a deal is negotiated between search engine operators and publishers. A lot is different and not easy, but new business models have emerged in other industries as well, which are slowly picking up speed again.

Zebracar, DriveNow and Flinkster bill car usage by the minute and by the kilometer. Can’t I rent my newspaper for a one-hour lunch break, too?

Adobe Photoshop, music or videos are available by subscription (yes, a culture flat rate; the GEZ fee is nothing else) or pay-per-use. Why not newspaper articles, too?

I can click together my mobile phone plan individually. Why not my newspaper? Technically too complex, or is the quarterly earnings pressure too high?

It all repeats itself. Music, film, now the publishing industry. The internet renders many old business models obsolete. Protecting them with laws, and above all with isolated national solutions, is counterproductive.

Maven Reports in Jenkins

Code quality is a sensitive topic. It affects your maintenance cost as well as your customer satisfaction. Not to mention your developers’ motivation to work with the code. Who wants to fix ugly code, right?

Discussing code quality always needs hard facts and numbers! So this is a short tutorial on how to create some simple reports to analyze code quality metrics.

Reports

This section briefly explains the reports used.

Findbugs

FindBugs looks for bugs in Java programs. It is based on the concept of bug patterns. A bug pattern is a code idiom that is often an error.

FindBugs Analysis

Checkstyle

Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard. It automates the process of checking Java code to spare humans of this boring (but important) task. This makes it ideal for projects that want to enforce a coding standard.

Checkstyle Analysis

Cobertura Code Coverage

Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.

Cobertura Report

Surefire Test Report

The Surefire Plugin is used during the test phase of the build lifecycle to execute the unit tests of an application. It generates reports…

Surefire Testreport

Basic pom.xml

Starting with a basic pom configuration:

<project>
 
  ...
  <properties>
     <findbugs.version>2.5.2</findbugs.version>
     <checkstyle.version>2.9.1</checkstyle.version>
     <surefire.reportplugin.version>2.12.4</surefire.reportplugin.version>
     <cobertura.version>2.5.2</cobertura.version>
  </properties>
 
  <build>
     <plugins>
        <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>findbugs-maven-plugin</artifactId>
           <version>${findbugs.version}</version>
        </plugin>
        <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>cobertura-maven-plugin</artifactId>
           <version>${cobertura.version}</version>
           <configuration>
               <formats>
                   <format>xml</format>
               </formats>
           </configuration>
        </plugin>
     </plugins>
  </build>
 
  <reporting>
     <plugins>
        <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>findbugs-maven-plugin</artifactId>
           <version>${findbugs.version}</version>
        </plugin>
        <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-checkstyle-plugin</artifactId>
           <version>${checkstyle.version}</version>
        </plugin>
        <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-surefire-report-plugin</artifactId>
           <version>${surefire.reportplugin.version}</version>
        </plugin>
        <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>cobertura-maven-plugin</artifactId>
           <version>${cobertura.version}</version>
           <configuration>
               <formats>
                   <format>xml</format>
               </formats>
           </configuration>
        </plugin>
      </plugins>
   </reporting>
</project>
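
To actually generate the report data the Jenkins plugins pick up, the build has to run the corresponding goals. A sketch of a suitable Maven invocation (the goal names are the standard ones of the respective plugins):

mvn clean install findbugs:findbugs checkstyle:checkstyle cobertura:cobertura

Alternatively, mvn site runs the complete reporting section configured above.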

 

Jenkins Plugins

You need to install a few Jenkins plugins to get a nice integration with your reports, namely the FindBugs, Checkstyle and Cobertura plugins.

Project Configuration

Now you need to configure your project to show the results of your reports.

Findbugs and Checkstyle


You can configure them in the “build configuration” tab. There are some thresholds to set, which influence the representation.

Cobertura

Cobertura Config

Cobertura is configured in the “post-build actions”. The configuration is similar to the FindBugs and Checkstyle plugins.

Result

On your project’s main page you now have some new graphs and links.

Jenkins Trend Graphs

Jenkins Navbar

MySql Timezones in the Cloud

On a small university project I found myself developing a web application with Play, MySQL and some JavaScript libraries. After testing and developing on my local machine, I wanted to deploy my application.

Have you heard of OpenShift? It’s an amazing PaaS product by RedHat. It’s currently in Developer Preview and you can test it for free. To deploy, follow this amazingly good tutorial.

What happened to my 24/7 chart?

The correct visualization

The incorrect visualization

Timezones

OpenShift uses Amazon’s EC2 service. In particular, the servers are located in the US-East region. But that shouldn’t be too hard to change, right?

  1. Install the PhpMyAdmin cartridge
  2. Log into your app with ssh and import the time zone tables into mysql
    mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u admin -p mysql
  3. Log in as admin in PhpMyAdmin
  4. Set the global timezone with
    -- Set correct time zone
    SET GLOBAL time_zone = 'Europe/Berlin';
    -- check if time zone is correctly set
    SELECT version( ) , @@time_zone , @@system_time_zone , NOW( ) , UTC_TIMESTAMP( );

Happy timezone :)

JUnit Benchmarking

Benchmarks are important if you have a performance critical application. Even if your application is not performance critical, better performance is always nice to have.

To check that your application works correctly, unit tests with JUnit are the first choice in the Java world. So why not use unit tests to check performance as well? There is an awesome small library called JUnitBenchmarks which enables you to do that easily. There are some other frameworks like Google Caliper, but there you don’t get the entire JUnit comfort.

Getting started

For a simple start, go to the JUnitBenchmarks tutorial page, which is very good. There is no black magic behind the scenes.

Parameterized Benchmarks

Google Caliper has a nice feature to run benchmarks with different types of parameters. JUnitBenchmarks doesn’t need that, as JUnit supports this out of the box. Taking the first very simple benchmark from JUnitBenchmarks, a parameterized test could look like this:

import java.util.Arrays;
import java.util.Collection;
 
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
 
import com.carrotsearch.junitbenchmarks.AbstractBenchmark;
import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
 
@RunWith(Parameterized.class)
public class MyTest extends AbstractBenchmark {
 
  private final long sleep;
 
  public MyTest(long sleep) {
    this.sleep = sleep;
  }
 
  @Parameters
  public static Collection<Object[]> data() {
    // The actual parameters
    Object[][] data = new Object[][] { { 20 }, { 50 }, { 100 } };
    return Arrays.asList(data);
  }
 
  @Test
  public void testSleep() throws Exception {
    Thread.sleep(sleep);
  }
}

Now you can run your benchmarks with different settings.
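
The imported @BenchmarkOptions annotation additionally lets you control warmup and measurement rounds per test method; a small sketch (the round counts are arbitrary):

// 3 unmeasured warmup rounds, then 10 measured benchmark rounds
@BenchmarkOptions(warmupRounds = 3, benchmarkRounds = 10)
@Test
public void testSleep() throws Exception {
  Thread.sleep(sleep);
}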

Developing OpenSource

Developing open source software is fun and is supported by a wide range of companies with free infrastructure. In this article I want to show how you can build a free development environment for your open source project. Some of it is Java (or more generally JVM) specific, but some of it can be applied to any project.

Source Code Hosting

There are three big hosters I know of which do a really good job: GitHub, GoogleCode and Bitbucket. For my projects I prefer GitHub, as it focuses solely on git and provides a good integration with some other tools I use.

What GitHub offers

  1. Obviously a git repository to host your source code
  2. A nice landing page on your repo (README.md)
  3. Optional wiki (awesome for tutorials)
  4. Optional issue tracker (just a small set of features, but enough for most projects)
    1. Milestones, labels and schedule support
    2. Eclipse Mylyn integration with the issue tracker (Markdown WikiText support will be added soon)
  5. Pull requests, a great feature for collaboration with external contributors
  6. GitHub Pages (more on that later)
  7. Android App

So basically you have everything to get started within GitHub: 300 MB of free space (soft limit) for your projects. You can really code a lot within 300 MB.

Project Build

Using a build tool should be obligatory for an open source project. Let me explain why:

IDE files
Of course, it’s easier for you to checkout your IDE files inside the repository, because at the beginning it will only be you, coding in your project. However your favorite IDE maybe not my favorite IDE and now you must create all the IDE specific files yourself, which can be sometimes a though task.

Unique build process
There are no “my project doesn’t compile” excuses, as the build process is described in a build file and will be executed the identical way on every machine. Of course this implies you don’t hardcode paths which only exist on your machine.

Continuous Integration
Ah, such a magic word. Continuous integration isn’t possible without a build tool, as every build server needs some hints on how to build the project.

Maven

Maven is my favorite build tool. I know there are plenty of other build tools out there like sbt, gradle, ant + ivy. However, as far as I have experienced them, maven is more verbose, but has a huge ecosystem with a lot of tutorials, plugins and nice features. Some of them are:

  1. One build file pom.xml
  2. Good IDE integration for most IDEs, but the command line is handy, too.
  3. Writing your own plugins is straightforward
  4. Open source repository server Nexus OSS
  5. Maven Central to publish your projects
  6. Site generation for an easy project site.

Continuous Integration

Pushing your changes to the repository doesn’t necessarily imply that you ran all your tests. However, this should be done! For this, travis-ci is a great platform. It integrates very smoothly with github. You see the build status of your project on pull requests and can integrate the build status very easily into your README.md or any other website.
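
A minimal sketch of a .travis.yml for a maven-based Java project (an assumption of a typical setup; travis detects the pom.xml and runs a default maven build):

language: java
jdk:
  - openjdk7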

Deploying your stuff

Of course, at some point you want to publish your stuff. Uploading your jars is a possible way and should be done, as nobody wants to compile your library themselves (yeah, some guys really love this, but that’s not the majority). Maven Central is definitely the right place to do this. There is a very good tutorial on how to get an account and publish your artifacts. The process includes PGP signature generation and getting a bit more familiar with the maven release process, but it’s definitely worth it. And here the circle closes: you can now use your projects in any other project you create, like anyone else can.

A small website would be nice

If the github README and the wiki are not enough, github has one more gift for you: GitHub Pages.

As we use maven, we can create a nice site with mvn site, which can be customized, and some useful reports such as unit test, checkstyle and findbugs reports can be added.

Mailinglist

GoogleGroups is the first choice. I haven’t found anything better yet.

Show me a project

The current project I spend my time on can be found here. Only the mailing list is currently not on google groups, as there is a legacy mailing list.

Simple JUnit Tests with Tycho and Surefire

Eclipse Tycho requires a special packaging type for test bundles, eclipse-test-plugin. This is okay when you have your own eclipse based project with all the modularity you want. However, sometimes you have legacy libraries or want to keep your source code and test code close to each other, and don’t want to create another plugin just to run the tests, like in this project.

Tycho has a surefire plugin, but it doesn’t cover this case. So you need to configure the good old maven surefire plugin for your needs. Before explaining, this is what the important part of the pom.xml looks like:

<!-- plain surefire tests without tycho -->
<testSourceDirectory>src/test/java</testSourceDirectory>
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.12.4</version>
    <executions>
      <execution>
        <id>test</id>
        <phase>test</phase>
        <configuration>
          <includes>
            <include>**/*Test.java</include>
          </includes>
        </configuration>
        <goals>
          <goal>test</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
 
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.5.1</version>
    <executions>
      <execution>
        <id>compiletests</id>
        <phase>test-compile</phase>
        <goals>
          <goal>testCompile</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
</plugins>

  1. You must specify the test directory (src/test/java).
  2. You have to bind the maven compiler plugin to the test-compile phase so this directory gets compiled.
  3. Activate the maven surefire plugin.

Thanks to this mailing-list post.

PDE target platform cache path

Sometimes you build a broken bundle and publish it on a local update site for some of your colleagues. However, your colleagues have already set up their target platform, and Eclipse PDE has cached the bundles. PDE seems to be really smart when it comes to using cached bundles. Deleting and resetting the target platform didn’t work for me. So I wanted to replace the bundle in the cache, but where is the folder?

Short answer:

{workspace}/.metadata/.plugins/org.eclipse.pde.core/.bundle_pool/plugins/

How I found it:

  1. Go to your workspace
  2. find -name 'bundle.name*', which reveals the caching directory and the location of your file
  3. Replace the incorrect bundle in your cache

Note:

This is a quick hack for development environments. If a release is broken, you should realize this before you publish the site. And if it happens anyway, update the version of your broken bundle and republish, so the newer version is fetched.