Monitoring Akka with Kamon


I like the JVM a lot because there are plenty of tools available for inspecting a running JVM
instance at runtime. Java Mission Control (jmc) is one of my favorite tools when it comes
to monitoring threads, hot methods and memory allocation.

However, these tools are of limited use when monitoring an event-driven, message-based system
like Akka. A thread is almost meaningless, as it could have processed any kind of message. Luckily
there are some tools out there to fill this gap. Even though the Akka docs are really extensive and
useful, there isn’t a lot in them about monitoring.

I’m more of a Dev than an Ops guy, so I will only give a brief, “I think it does this” introduction to
the monitoring-storage-gathering-displaying stuff.

The Big Picture

First of all, when we are done we will have this infrastructure running:


Thanks to docker we don’t have to configure anything on the right-hand side to get started.


Starting on the left of the picture: Kamon is a library which uses AspectJ to hook into method calls
made by the ActorSystem and record events of different types. The Kamon docs have some big gaps,
but you can get a feeling for what is possible. I will not make any special configuration and just use the
defaults to get started as fast as possible.

StatsD – Graphite

A network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services.

Kamon also provides other backends (Datadog, New Relic) to report to. For this tutorial we stick with the free
StatsD server and Graphite as the backend service.


Grafana is a frontend for displaying the stats you logged to Graphite. There is a nice demo you can play
around with. However, I will give detailed instructions on how to add your metrics to our Grafana dashboard.

Getting started

First we need an application we can monitor. I’m using my akka-kamon-activator. Check out the code:

git clone

The application contains two message generators: one for peaks and one for constant load. Two types of
actors handle these messages: one creates random numbers and its child actors calculate the prime factors.
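As an illustration of the work the child actors perform, a prime factorization by trial division might look like this (a sketch; the activator’s actual implementation may differ, and the object name is made up here):

```scala
object PrimeFactors {

  // Trial division: repeatedly divide out the smallest remaining factor.
  def primeFactors(n: Long): List[Long] = {
    def loop(remaining: Long, divisor: Long, acc: List[Long]): List[Long] =
      if (remaining == 1) acc.reverse
      else if (remaining % divisor == 0) loop(remaining / divisor, divisor, divisor :: acc)
      else loop(remaining, divisor + 1, acc)

    loop(n, 2, Nil)
  }

  def main(args: Array[String]): Unit =
    println(primeFactors(360)) // List(2, 2, 2, 3, 3, 5)
}
```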

Kamon Dependencies and sbt-aspectj

First we add the kamon dependencies via

val kamonVersion = "0.3.4"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.5",
  "io.kamon" %% "kamon-core" % kamonVersion,
  "io.kamon" %% "kamon-statsd" % kamonVersion,
  "io.kamon" %% "kamon-log-reporter" % kamonVersion,
  "io.kamon" %% "kamon-system-metrics" % kamonVersion,
  "org.aspectj" % "aspectjweaver" % "1.8.1"
)

Next we configure the sbt-aspectj-plugin to weave our code at compile time.
First add the plugin to your plugins.sbt

addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.9.4")

And now we configure it

javaOptions <++= AspectjKeys.weaverOptions in Aspectj
// when you call "sbt run" aspectj weaving kicks in
fork in run := true

The last step is to configure what should be recorded. Open up the application.conf where your akka configuration resides. Kamon uses the kamon configuration key.

kamon {
  metrics {
    # what should be recorded
    filters = [
      {
        # actors that should be monitored
        actor {
          includes = [ "user/*", "user/worker-*" ] # a list of what should be included
          excludes = [ "system/*" ]                # a list of what should be excluded
        }
      },
      {
        # not sure about this yet. Looks important
        trace {
          includes = [ "*" ]
          excludes = []
        }
      }
    ]
  }

  # ~~~~~~ StatsD configuration ~~~~~~~~~~~~~~~~~~~~~~~~
  statsd {
    # Hostname and port on which your StatsD is running. Remember that StatsD packets are sent using UDP;
    # Kamon won't warn you about unreachable hosts and/or closed ports, your data just won't go anywhere.
    hostname = ""
    port = 8125

    # Interval between metrics data flushes to StatsD. Its value must be equal to or greater than the
    # kamon.metrics.tick-interval setting.
    flush-interval = 1 second

    # Max packet size for UDP metrics data sent to StatsD.
    max-packet-size = 1024 bytes

    # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
    # collection for your desired entities must be activated under the kamon.metrics.filters settings.
    includes {
      actor       = [ "*" ]
      trace       = [ "*" ]
      dispatcher  = [ "*" ]
    }

    simple-metric-key-generator {
      # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
      # this pattern:
      application = "yourapp"
    }
  }
}

Our app is ready to run. But first, we deploy our monitoring backend.

Monitoring Backend

As we saw in the first picture, we need a lot of stuff running in order to store our log events. The libraries and components used are most likely replaceable, and you (or the more-Ops-than-Dev guy) will have to configure them. But for the moment we just fire them all up at once in a simple docker container. I don’t put them in detached mode, so I can see what’s going on.

docker run -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 -p 8083:8083 -p 8086:8086 -p 8084:8084 --name kamon-grafana-dashboard muuki88/grafana_graphite:latest

My image is based on a fork of the original docker image by kamon.

Run and build the Dashboard

Now go to your running Grafana instance at localhost. You see a default dashboard, which we will use to display
the average time-in-mailbox. Click on the title of the graph (First Graph (click title to edit)). Now select the metrics like this:


And that’s it!

Akka Cluster with Docker containers


This article will show you how to build docker images that contain a single akka cluster application. You will be able to run multiple seed nodes and multiple cluster nodes. The code can be found on Github and will be available as a Typesafe Activator.

If you don’t know docker or akka

Docker is the new shiny star in the devops world. It lets you easily deploy images to any OS running docker, while providing an isolated environment for the applications running inside the container image.

Akka is a framework to build concurrent, resilient, distributed and scalable software systems. The cluster feature lets you distribute your Actors across multiple machines to achieve load balancing, fail-over and the ability to scale up and out.

The big picture

This is what the running application will look like, no matter where your docker containers end up running. The numbers at the top left describe the starting order of the containers.


First you have to start your seed nodes, which will “glue” the cluster together. After the first node is started, all following seed nodes have to know the IP address of the initial seed node in order to build up a single cluster. The approach described in this article is very simple, but easily configurable, so you can use it with other provisioning technologies like chef, puppet or zookeeper.

All following nodes that get started need at least one seed-node-ip in order to join the cluster.
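For comparison, in a static (non-docker) setup you would simply list the seed nodes in the akka.cluster section of your application.conf; the addresses below are placeholders:

```
akka.cluster {
  seed-nodes = [
    "akka.tcp://application@",
    "akka.tcp://application@"
  ]
}
```

The point of the configuration machinery described below is to build exactly this list at runtime instead of hardcoding it.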

The application configuration

We will deploy a small akka application which only logs cluster events. The entrypoint is fairly simple:

object Main extends App {
  val nodeConfig = NodeConfig parse args
  // If a config could be parsed - start the system
  nodeConfig map { c =>
    val system = ActorSystem(c.clusterName, c.config)
    // Register a monitor actor for demo purposes
    system.actorOf(Props[MonitorActor], "cluster-monitor")
    system.log info s"ActorSystem ${c.clusterName} started successfully"
  }
}

The tricky part is the configuration. First, the akka.remote.netty.tcp.hostname setting needs to be set to the docker IP address. The port configuration is unimportant, as we have unique IP addresses thanks to docker. You can read more about docker networking here. Second, the seed nodes should add themselves to the akka.cluster.seed-nodes list. And last, everything should be configurable through system properties and environment variables. Thanks to the Typesafe Config Library this is achievable (even if with some sweat and tears).

  1. Generate a small commandline parser with scopt and the following two parameters:
    --seed flag which determines if the starting node should act as a seed node
    ([ip]:[port])… unbounded list of [ip]:[port] entries which represent the seed nodes
  2. Split the configuration into three files
    1. application.conf which contains the common configuration
    2. node.cluster.conf contains only the node-specific configuration
    3. node.seed.conf contains only the seed-node-specific configuration
  3. A class NodeConfig which orchestrates all settings and cli parameters in the right order and builds a Typesafe Config object.

Take a closer look at the NodeConfig class. The core part is this:

// seed nodes as generated string from cli
(ConfigFactory parseString seedNodesString)
  // the hostname
  .withValue("clustering.ip", ipValue)
  // node.cluster.conf or node.seed.conf
  .withFallback(ConfigFactory parseResources configPath)
  // default ConfigFactory.load but unresolved
  .withFallback(config)
  // try to resolve all placeholders (clustering.ip and clustering.port)
  .resolve()

The part that resolves the IP address is a bit hacky, but should work in default docker environments. First the eth0 interface is looked up, and then its first address for which isSiteLocalAddress returns true is used. IP addresses in the following ranges are site-local:, and
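A minimal sketch of such a lookup using only the JDK (assuming, as is the default inside a docker container, that the interface is named eth0; object and method names are made up here):

```scala
import java.net.NetworkInterface
import scala.collection.JavaConverters._

object HostIp {

  // Find the first site-local (RFC 1918) address on eth0, if any
  def findEth0SiteLocal(): Option[String] = for {
    eth0    <- Option(NetworkInterface.getByName("eth0"))
    address <- eth0.getInetAddresses.asScala.find(_.isSiteLocalAddress)
  } yield address.getHostAddress

  def main(args: Array[String]): Unit =
    println(findEth0SiteLocal() getOrElse "no site-local address on eth0")
}
```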

The main cluster configuration is done inside the clustering section of the application.conf:

clustering {
  # ip = "" # will be set from the outside or automatically
  port = 2551
  cluster-name = "application"
}

The IP address will be filled in by the algorithm described above if nothing else is set. You can easily override all settings with system properties.
E.g. if you want to run a seed node and a cluster node inside your IDE without docker, start them like this:

# the seed node
-Dclustering.port=2551 -Dclustering.ip= --seed
# the cluster node
-Dclustering.port=2552 -Dclustering.ip=

For sbt this looks like this:

# the seed node
sbt runSeed
# the cluster node
sbt runNode

The build

Next we build our docker image. The sbt-native-packager plugin recently added experimental docker support, so we only need to configure our build to be docker-ready. First add the plugin to your plugins.sbt:

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "0.7.4")

Now we add a few required settings to our build.sbt. You should use sbt 0.13.5 or higher:

// adds start script and jar mappings
// the docker maintainer. You could scope this to "in Docker"
maintainer := "Nepomuk Seiler"
// Short package description
packageSummary := s"Akka ${version.value} Server"

And now we are set. Start sbt and run docker:publishLocal and a docker image will be created for you. The Dockerfile is placed in target/docker if you want to take a closer look at what’s created.

Running the cluster

Now it’s time to run our containers. The image name is by default name:version; for our activator it’s akka-docker:2.3.4. The seed IP addresses may vary. You can read them from the console output of your seed nodes.

docker run -i -t -p 2551:2551 akka-docker:2.3.4 --seed
docker run -i -t -p 2551:2551 akka-docker:2.3.4 --seed
docker run -i -t -p 2551:2551 akka-docker:2.3.4
docker run -i -t -p 2551:2551 akka-docker:2.3.4

What about linking?

This blog entry describes a different approach to building an akka cluster with docker. I used some of the ideas, but the basic concept is built on top of linking the docker containers. This allows you to get the IP and port information of the running seed nodes. While this approach is suitable for single host machines, it seems to get more messy when working with multiple docker machines.

The setup in this blog requires only one thing: a central way of assigning host IPs. If your seed nodes don’t change their IP addresses, you can basically configure almost everything in your application.conf already.

How to add a maven-plugin jar as dependency to sbt

I want to use the jdeb library in one of my own libraries. Since it is
packaged as a maven-plugin, SBT does not resolve the jars if you
just add it as a plain dependency. This will do the trick:

"org.vafer" % "jdeb" % "1.2" artifacts (Artifact("jdeb", "jar", "jar"))

This one is for sbt 0.13.5!

Open Source Projects – Between accepting and rejecting pull request

Lately I have done a lot of work for the sbt-native-packager project. Being a committer comes
with a lot of responsibilities. You are responsible for the code quality, supporting your community,
encouraging people to contribute to your project and of course providing an awesome open
source product.

Most open source committers will probably start out as contributors by providing pull
requests fixing bugs or adding new features. From this side it looks rather simple: the
project maintainer probably knows his/her domain and the code well enough to make
a good judgement. Right?

This is not always the case. The bigger a project gets, the smaller the chance that
one contributor alone can merge your pull request. However, there’s a lot you can do
to make things easier! I’m really glad a lot of contributors already do many of these things,
but I wanted to write down my experience.

Provide tests

This is obvious, right? However, tests are so much more than just proving that it works or
that it’s fixed. Tests are like documentation for the maintainers. They can see how
the new feature works or what caused the bug. Furthermore, it gives the maintainers
confidence to work on this feature or bug fix themselves, as there’s already a test which
checks their work.

Provide documentation

If you add a new feature, then add minimal documentation. A few sentences on what it does,
how to use it and why you should use it are enough. It makes life a lot easier for maintainers
judging your pull request, because they can try it out very easily themselves without going
through all of your code first.

Be ready for changes

Maintaining a healthy code base with a lot of contributors is a challenge. So if you decide
to contribute to an open source project, try to stick to the style which is already applied in
the repository. This applies from the high abstraction level down to the depths of low-level code.
And if you don’t, then be prepared to change your code, as the maintainers have to make sure
the code can be easily understood by everybody else. Sometimes it’s hard not to take this
personally, and we try to be very polite. However, sometimes corrections are necessary.

There’s an easy way to avoid all of this…

Small commits, early pull requests

Start small and ask early. Write comments in your code, and use the awesome tooling most
code hosting sites provide, like discussions or in-code comments. Providing a base for
discussion is IMHO the best way to get things done. You can discuss what’s good and
bad, and whether the approach is correct or not. You avoid a lot of work which might not be useful
or is out of scope, and the maintainers don’t have to feel bad about rejecting a lot of work.

Tell us more!

A lot of open source projects were created for a specific need, but the nature of an
open source project sometimes leads to an extension of this specific need as you
add more features. Tell us what you do with it! The maintainers (hopefully) love their
project and are amazed by the things you can do with it. Write blog posts, tweets
or stackoverflow discussions to show your case.

Playframework and RequireJS

RequireJS Logo

As a backend developer I like tools that help me structure my code. Doing more and more frontend stuff, I finally got time to learn some of the basics of RequireJS. Unfortunately the number of tutorials on how to combine the Playframework and RequireJS in multipage applications is not too big. There are some with AngularJS, but I didn’t want to port my applications to two new systems.

Application structure

For a sample application, I implemented two pages:

Both will have their own data-entry point and dependencies. The index page looks like this:

@(message: String)

@main("RequireJS with Play") {
    // html here
    @helper.requireJs(core = routes.Assets.at("javascripts/require.js").url,
                      module = routes.Assets.at("javascripts/main/main").url)
}

The routes file is very basic, too:

GET    /                controllers.Application.index
GET    /dashboard       controllers.Application.dashboard
POST   /api/sample      controllers.Application.sample
### Additions needed
GET    /jsroutes.js     controllers.Application.jsRoutes()
### Enable webjar based resources to be returned
GET    /webjars/*file   controllers.WebJarAssets.at(file)
GET    /assets/*file    controllers.Assets.at(path="/public", file)

The javascript folder layout

  • assets/javascripts
    • common.js
    • main.js
    • dashboard
      • chart.js
      • main.js
    • lib
      • math.js

How does it work?

First you define a file common.js, which is used to configure requirejs:

(function(requirejs) {
    "use strict";
    requirejs.config({
        baseUrl : "/assets/javascripts",
        shim : {
            "jquery" : {
                exports : "$"
            },
            "jsRoutes" : {
                exports : "jsRoutes"
            }
        },
        paths : {
            "math" : "lib/math",
            // Map the dependencies to CDNs or WebJars directly
            "_" : "//",
            "jquery" : "//localhost:9000/webjars/jquery/2.0.3/jquery.min",
            "bootstrap" : "//",
            "jsRoutes" : "//localhost:9000/jsroutes"
            // A WebJars URL would look like
            // //server:port/webjars/angularjs/1.0.7/angular.min
        }
    });

    requirejs.onError = function(err) {
        console.log(err);
    };
})(requirejs);

The baseUrl is important, as this will be the root path from now on. IMHO this makes things easier than relative paths.

The shim configuration is used to export your jsRoutes, which is defined in my Application.scala file. Of course you can add as many as you want.

The paths section is a bit tricky. Currently it seems there’s no better way than hardcoding the urls, like “jsRoutes” : “//localhost:9000/jsroutes”, when you use WebJars.

Define and Require

Ordering is crucial! For my /dashboard page the dashboard/main.js is my entry point:

// first load the configuration
require(["../common"], function(common) {
   console.log('Dashboard started');
   // Then load submodules. Remember the baseUrl is set:
   // even though you are in the dashboard folder you have to reference
   // dashboard/chart directly
   require(["jquery", "math", "dashboard/chart"], function($, math, chart) {
       console.log("Title is : " + $('h1').text());
       console.log("1 + 3 = " + math.sum(1,3));
       chart.load({ page : 'dashboard'}, function(data) {
           console.log(data);
       }, function(status, xhr, error) {
           console.log(error);
       });
   });
});

For the chart.js:

// first the configuration, then other dependencies
define([ "../common", "jsRoutes" ], function(common, jsRoutes) {
    return {
        load : function(data, onSuccess, onFail) {
            var r = jsRoutes.controllers.Application.sample();
            r.contentType = 'application/json';
            r.data = JSON.stringify(data);
            $.ajax(r).done(onSuccess).fail(onFail);
        }
    };
});


Future Composition with Scala and Akka


Scala is a functional and object-oriented language which runs on the JVM. For concurrent and/or parallel programming it is a suitable choice, along with the Akka framework, which provides a rich toolset for all kinds of concurrent tasks. In this post I want to show a little example of how to schedule a logfile-search job on multiple files/servers with Futures and Actors.


I created my setup with the Typesafe Activator Hello-Akka template. This results in a build.sbt file with the following content:

name := """hello-akka"""

version := "1.0"

scalaVersion := "2.10.2"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.2.0",
  "com.typesafe.akka" %% "akka-testkit" % "2.2.0",
  "com.google.guava" % "guava" % "14.0.1",
  "org.scalatest" % "scalatest_2.10" % "1.9.1" % "test",
  "junit" % "junit" % "4.11" % "test",
  "com.novocode" % "junit-interface" % "0.7" % "test->default"
)

testOptions += Tests.Argument(TestFrameworks.JUnit, "-v")

Scala built-in Futures

Scala already has built-in support for Futures. The implementation is based on java.util.concurrent. Let’s implement a Future which runs our log search.

import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits._

object LogSearch extends App {

  println("Starting log search")
  val searchFuture = future {
    Thread sleep 1000
    "Found something"
  }

  println("Blocking for results")
  val result = Await result (searchFuture, 5 seconds)
  println(s"Found $result")
}

This is all we need to run our task in another thread. The implicit import from ExecutionContext provides a default ExecutionContext which handles the threads the future is running on. After creating the future we wait for our results with the blocking call Await result. So far nothing too fancy.

Future composition

There are a lot of examples where the for-yield syntax is used to compose future results. In our case we have a dynamic list of futures: the log search results from each server.

For testing future capabilities we will create a list of futures from a list of ints which represent the time the task will run. Types are just for clarification.

val tasks = List(3000, 1200, 1800, 600, 250, 1000, 1100, 8000, 550)
val taskFutures: List[Future[String]] = tasks map { ms =>
  future {
    Thread sleep ms
    s"Task with $ms ms"
  }
}

In the end, we want a List[String] as a result. This is done with the Future companion object.

val searchFuture: Future[List[String]] = Future sequence taskFutures

And finally we can wait for our results with:

val result = Await result (searchFuture, 2 seconds)

However, this will throw a TimeoutException, as some of our tasks run for more than 2 seconds. Of course we could increase the timeout, but the error could always happen again when a server is down. Another approach would be to handle the exception and return an error, but then all other results would be lost.
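Side note: if the worry is losing the other results, one way to keep them (within plain scala.concurrent, independent of the fallback built below) is to attach a recover to each individual future before sequencing. This handles failures, though not timeouts, since a slow future never fails by itself:

```scala
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object RecoverExample extends App {

  // one future fails, the other succeeds
  val tasks: List[Future[String]] = List(
    Future { "found something" },
    Future[String] { throw new RuntimeException("server down") }
  )

  // recover each future individually, so Future.sequence
  // keeps the good results instead of failing as a whole
  val safe: List[Future[String]] = tasks map (_ recover {
    case e: Exception => s"failed: ${e.getMessage}"
  })

  val results = Await.result(Future sequence safe, 5.seconds)
  results foreach println
  // prints "found something" and "failed: server down"
}
```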

Future – Timeout fallback

No problem: we generate a fallback which will return a default value if the operation takes too long. A very naive implementation of our fallback could look like this:

def fallback[A](default: A, timeout: Duration): Future[A] = future {
  Thread sleep timeout.toMillis
  default
}

The fallback future will return after the executing thread has slept for the timeout duration. The calling code now looks like this:

val timeout = 2 seconds
val tasks = List(3000, 1200, 1800, 600, 250, 1000, 1100, 8000, 550)

val taskFutures: List[Future[String]] = tasks map { ms =>
  val search = future {
    Thread sleep ms
    s"Task with $ms ms"
  }
  Future firstCompletedOf Seq(search,
    fallback(s"timeout $ms", timeout))
}

val searchFuture: Future[List[String]] = Future sequence taskFutures

println("Blocking for results")
val result = Await result (searchFuture, timeout * tasks.length)
println(s"Found $result")

The important call here is Future firstCompletedOf Seq(..) which produces a future returning the result of the first finished future.

This implementation is very bad, as discussed here. In short: we are wasting CPU time by putting threads to sleep. Also, the blocking call timeout is more or less a guess. With a one-thread scheduler it can actually take more time.

Futures and Akka

Now let’s make this more performant and more robust. Our main goal is to get rid of the poor fallback implementation, which blocked a complete thread. The idea is to schedule the fallback future after a given duration. This way all threads work on real tasks, while the fallback future’s execution time is almost zero. Java has a ScheduledExecutorService of its own, or you can use a different implementation, the HashedWheelTimer by Netty. Akka used to use the HashedWheelTimer, but now has its own implementation.

So let’s start with the actor:

import akka.actor._
import akka.pattern.{ after, ask, pipe }
import akka.util.Timeout

class LogSearchActor extends Actor {

  import context.dispatcher

  def receive = {
    case Search(worktimes, timeout) =>
      // Doing all the work in one actor using futures
      val searchFutures = worktimes map { worktime =>
        val searchFuture = search(worktime)
        val fallback = after(timeout, context.system.scheduler) {
          Future successful s"$worktime ms > $timeout"
        }
        Future firstCompletedOf Seq(searchFuture, fallback)
      }
      // Pipe future results to sender
      (Future sequence searchFutures) pipeTo sender
  }

  def search(worktime: Int): Future[String] = future {
    Thread sleep worktime
    s"found something in $worktime ms"
  }
}

case class Search(worktime: List[Int], timeout: FiniteDuration)

The important part is the after method call. You give it a duration after which the future should be executed, and as a second parameter the scheduler, which in our case is the default one of the actor system. The third parameter is the future which should get executed. I use the Future successful companion method to return a single string.

The rest of the code is almost identical. pipeTo is an akka pattern to return the results of a future to the sender. Nothing fancy here.

Now, how do we call all this? First the code:

object LogSearch extends App {

  println("Starting actor system")
  val system = ActorSystem("futures")

  println("Starting log search")
  try {
    // timeout for each search task
    val fallbackTimeout = 2 seconds

    // timeout used with akka.patterns.ask
    implicit val timeout = new Timeout(5 seconds)
    require(fallbackTimeout < timeout.duration)

    // Create SearchActor
    val search = system.actorOf(Props[LogSearchActor])

    // Test worktimes for search
    val worktimes = List(1000, 1500, 1200, 800, 2000, 600, 3500, 8000, 250)

    // Asking for results
    (search ? Search(worktimes, fallbackTimeout))
      // Cast to correct type
      .mapTo[List[String]]
      // In case something went wrong
      .recover {
        case e: TimeoutException => List("timeout")
        case e: Exception => List(e getMessage)
      }
      // Callback (non-blocking)
      .onComplete {
        case Success(results) =>
          println(":: Results ::")
          results foreach (r => println(s" $r"))
          system shutdown ()
        case Failure(t) =>
          t printStackTrace ()
          system shutdown ()
      }
  } catch {
    case t: Throwable =>
      t printStackTrace ()
      system shutdown ()
  }

  // Await end of programm
  system awaitTermination (20 seconds)
}

The comments should explain most of the parts. This example is completely asynchronous and works with callbacks. Of course you can use the Await result call as before.



There are always some methods, classes or helpers that you don’t find in your programming language which would be useful to save boilerplate code, or that you need a lot in your program. Now you are faced with deciding whether to implement this particular function yourself ( do-it-yourself ) or use a third-party library ( don’t-repeat-yourself ). I sometimes have awesome discussions with my boss about which approach we should use. The following is a comparison of both approaches with pros and cons.

DRY | Don’t-Repeat-Yourself

The main question is: why should I reinvent the wheel? There are a lot of good libraries out there, like Google Guava, Apache Commons and many others for more specialized use cases. Before taking a closer look at a library, we consider the following points:

  • When was the last source code update? Is the library still maintained?
  • Are there documentation and examples showing how to use the library?
  • Is there any kind of community?

This is part of a list described on Java Code Geeks. If we cannot satisfy at least 2 of the 3 points, we won’t choose the library and look for a different one. Otherwise we take a closer look at the library, read tutorials and the documentation, and use the one function we were missing in our current library/language set. Often we do this by writing tests assuring the library does what we want it to do. This is crucial, because if you update the library, all your functions will be tested. Joda Datetime, for example, implements some RFC standards, which is pretty awesome. However, you have to read the RFC to know exactly what is going on, or you just test the methods you need.
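A toy example of such a pinning test, here asserting JDK behavior we might depend on (the same idea applies to Joda or Guava; the object name is made up):

```scala
import java.time.Duration

// Pin down exactly the library behavior we rely on, so an
// upgrade that silently changes it will fail our build loudly.
object LibraryPinningTest extends App {
  assert(Duration.parse("PT1H30M").getSeconds == 5400)
  assert(Duration.parse("PT0.5S").toMillis == 500)
  println("library behaves as expected")
}
```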

Let’s summarize the benefits of using a third-party library:

  • Less development time
  • Pretested functionality
  • Less maintenance effort

DIY | Do-It-Yourself

Sometimes you just need this one little method. However, it ships with a big library full of features you have never even heard of. The library satisfies every standard you set, but it is too big (no matter whether in terms of size, features or maintenance overhead). Even if the library isn’t too big, maybe you just don’t like its API or code style. So you start to write it yourself, and it will be part of your product.


| Task        | DRY                          | DIY                          | Description |
| ----------- | ---------------------------- | ---------------------------- | ----------- |
| Development | Learn the API                | Write your own API           | The more complex the library and the worse the documentation, the harder it is to learn an API compared to writing your own |
| Tests       | Test the library API you use | Test your own implementation | You always need them |
| Maintenance | Community support            | You are on your own          | When you use a third-party open source library, you should give something back |
| Happiness   | A bit                        | Even more                    | Developers are lazy, but they don't want to read API docs. Writing code on their own makes them even happier *sigh* |

Leistungsschutzrecht – Fantasies

The Leistungsschutzrecht (the German ancillary copyright for press publishers) has passed the Bundestag for now. Even though the current version was watered down in favor of the internet community, the whole endeavor remains more than questionable. I would therefore like to lay out possible consequences, likely as well as unlikely ones.

Everything shuts down

The interpretation of “short text excerpt” becomes so restrictive that no meaningful display is possible anymore.

Google will try to explain to the publishers the already existing technical means of making their pages non-indexable, so that they don’t show up in search engines. The publishers reject this option, because no money can be earned that way. Thereupon a blacklist following the Belgian model is created for all big search engine providers, which prevents publishers’ pages from being scanned.

Only the lawyers profit from the many court proceedings between search engine providers and publishers over distortion of competition. What remains in the end is a “this link cannot be displayed in your country”. German readers find their news on foreign portals, and publishers complain that they have to put more money than ever into search engine advertising and optimization.

The German Wikipedia, the second-largest, would be under maintenance for an indefinite time, because all articles have to be searched for allegedly unlawfully used quotes, links and text excerpts.

Facebook, Twitter, Google+ and co will, in Germany, either

- become paid services, so that a flat fee is paid to the publishers in order to allow every user unrestricted sharing, linking and retweeting,
- or extend their terms of service such that users themselves are liable for shared content, and publishers using the platform agree that their content may be shared.

Apps and programs like Thunderbird, Google Currents or simple RSS readers are criminalized and become illegal, because they display content from publishers’ websites. Shortly before browsers themselves come under attack, many publishers notice how ridiculous these campaigns are, in the face of drastically declining online readership that is not offset by readers in the print sector.


The interpretation of “short text excerpt” is lenient enough that news aggregators and search engines can keep working as before. Sharing on social networks thus remains unhindered, with the exception of critical voices. Here the Leistungsschutzrecht is used to suppress these voices as far as possible. Shades of ACTA.


The law is rejected, and instead, as in France, a deal is negotiated between search engine providers and publishers. Much is different and not easy, but new business models have emerged in other industries too, which are slowly gaining momentum again because of them.

Zebracar, Drive Now und Flinkster rechnen die Autonutzung im Minuten und Kilometertakt ab. Kann ich mir nicht auch meine Zeitung für eine Stunde Mittagspause mieten?

Adobe Photoshop, Musik oder Videos kann ich mir im Abo holen ( ja, Kultur-Flatrate, nichts anderes ist die GEZ Pauschale) oder pay-per-use. Warum nicht auch Zeitungsartikel?

Meinen Handytarif kann ich mir individuell zusammen klicken. Warum nicht meine Zeitung? Technisch zu aufwändig oder ist der Quartalsdruck zu hoch.

Es wiederholt sich alles. Musik, Film, jetzt die Verlagsbranche. Das Internet macht viele alte Geschäftsmodelle obsolet. Diese aber mit Gesetzen, und vor allem Insellösungen zu schützen, ist kontraproduktiv.

Maven Reports in Jenkins

Code quality is a sensitive topic. It affects your maintenance cost as well as your customer satisfaction, not to mention your developers’ motivation to work with the code. Who wants to fix ugly code, right?

Discussing code quality always needs hard facts and numbers! So here is a short tutorial on how to create some simple reports to analyze code quality metrics.


This section briefly explains the reports used.


FindBugs

FindBugs looks for bugs in Java programs. It is based on the concept of bug patterns. A bug pattern is a code idiom that is often an error.

FindBugs Analysis



Checkstyle

Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard. It automates the process of checking Java code to spare humans of this boring (but important) task. This makes it ideal for projects that want to enforce a coding standard.

Checkstyle Analysis


Cobertura Code Coverage

Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.

Cobertura Report


Surefire Test Report

The Surefire Plugin is used during the test phase of the build lifecycle to execute the unit tests of an application. It generates reports…

Surefire Testreport


Basic pom.xml

Starting with a basic pom configuration:
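
The original snippet is not included here, so the following is a minimal sketch of the `<reporting>` section, assuming the standard Maven plugin coordinates for FindBugs, Checkstyle, Cobertura, and the Surefire report; the versions are illustrative and should be adjusted to your setup.

```xml
<!-- Sketch of a reporting section; plugin versions are illustrative -->
<reporting>
  <plugins>
    <!-- FindBugs: static bug-pattern analysis -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.5.2</version>
    </plugin>
    <!-- Checkstyle: coding-standard checks -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>2.10</version>
    </plugin>
    <!-- Cobertura: test coverage -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.5.2</version>
    </plugin>
    <!-- Surefire report: unit test results as HTML -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
      <version>2.14.1</version>
    </plugin>
  </plugins>
</reporting>
```

Running `mvn site` then generates all four reports under `target/site`.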



Jenkins Plugins

You need to install a few Jenkins plugins to get a nice integration for your reports, e.g. the FindBugs, Checkstyle, and Cobertura plugins.

Project Configuration

Now you need to configure your project to show the results of your reports.

Findbugs and Checkstyle

FindBugs and Checkstyle



You can configure them in the “build configuration” tab. There are some thresholds to set, which influence how the results are displayed.


Cobertura Config

Cobertura Config


Cobertura is configured in the “post-build actions”. The configuration is the same as for the FindBugs and Checkstyle plugins.


On the main page of your project you now have some new graphs and links.

Jenkins Trend Graphs


Jenkins Navbar



MySQL Timezones in the Cloud

On a small university project I found myself developing a web application with Play, MySQL, and some JavaScript libraries. After testing and developing on my local machine, I wanted to deploy my application.

Have you heard of OpenShift? It’s an amazing PaaS product by Red Hat. It’s currently in Developer Preview and you can test it for free. To deploy your application, follow this amazingly good tutorial.

What happened to my 24/7 chart?

The correct visualization


The incorrect visualization



OpenShift uses Amazon’s EC2 service; in particular, the servers are located in the US-East region. But that shouldn’t be too hard to change, right?

  1. Install the PhpMyAdmin cartridge
  2. Log into your app with ssh and import the time zone tables into MySQL
    mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u admin -p mysql
  3. Log in as admin in PhpMyAdmin
  4. Set the global time zone with
    -- Set correct time zone
    SET GLOBAL time_zone = 'Europe/Berlin';
    -- check if time zone is correctly set
    SELECT version( ) , @@time_zone , @@system_time_zone , NOW( ) , UTC_TIMESTAMP( );
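
Once the zone tables are imported, a quick sanity check (a sketch; `CONVERT_TZ` with named zones only works if the import in step 2 succeeded):

```sql
-- CONVERT_TZ with named time zones needs the mysql time zone tables;
-- it returns NULL if they were not imported
SELECT CONVERT_TZ('2013-04-01 12:00:00', 'UTC', 'Europe/Berlin');
-- expected: 2013-04-01 14:00:00 (Berlin is on CEST, UTC+2, in April)
```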

Happy timezone :)