Traits for Groovy/Java

25. June, 2009

I’m again toying with the idea of traits for Java (or rather Groovy). Just to give you a rough idea if you haven’t heard about this before, think of my Sensei application template:

class Knowledge {
    Set tags;
    Knowledge parent;
    List children;
    String name;
    String content;
}
class Tag { String name; }
class Relation { String name; Knowledge from, to; }

A very simple model, but it contains everything you can encounter in an application: a parent-child/tree structure, 1:N and N:M mappings. Now the idea is to have a way to build a UI and a DB mapping from this code. The idea of traits is to implement real properties in Java.

So instead of fields with primitive types, you have real objects to work with:

    assert "name" == Knowledge.name.getName()

These objects exist partly at the class level and partly at the instance level. There is static information at the class level (the name of the property) and there is instance information (the current value). But it should be possible to add more information at both levels. So a DB mapper can add the necessary translation information at the class level, and a Hibernate mapper can build on top of that.
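To make this a bit more concrete, here is a rough sketch of the static half of such a property in plain Java. All names here are invented, and the real implementation would probably rely on Groovy's metaprogramming instead:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one shared Property instance per class carries
// the static information plus whatever metadata mappers want to attach.
public class Property<T> {
    private final String name;
    private final Map<String, Object> metadata = new HashMap<String, Object>();

    public Property(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Mappers (DB, Hibernate, UI, ...) register their information here,
    // at the class level.
    public void putMetadata(String key, Object value) {
        metadata.put(key, value);
    }

    public Object getMetadata(String key) {
        return metadata.get(key);
    }
}

Knowledge would then expose a static Property<String> name for the class-level information, while each instance keeps the current value in an ordinary field; that is what makes the assert above work.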

Oh, I hear you cry “annotations!” But annotations can suck, too. You can’t have smart defaults with annotations. For example, you can’t say “I want all fields called ‘timestamp’ to be mapped with a java.sql.Timestamp”. You have to add the annotation to each timestamp field. That violates DRY. It quickly gets really bad when you have to do this for several mappers: database, Hibernate, JPA, the UI, Swing, SWT, GWT. Suddenly, each property would need 10+ annotations!
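Just to illustrate the repetition, here is what this looks like with a made-up @MapAs annotation standing in for the real ones:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Date;

// @MapAs is invented for this example; every real mapper brings its own.
@Retention(RetentionPolicy.RUNTIME)
@interface MapAs {
    Class<?> value();
}

class AuditedRecord {
    // The same hint must be repeated on every timestamp field ...
    @MapAs(java.sql.Timestamp.class) Date timestamp;
    @MapAs(java.sql.Timestamp.class) Date lastModified;
    // ... and then multiplied by every framework that needs its own annotation.
}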

I think I’ve found a solution which should need relatively few lines of code with Groovy. I’ll let that stew for a couple of days in my subconscious and post another article when it’s well done 🙂


Jazoon, Day 2: XWiki

24. June, 2009

I’m a huge fan of wikis. Not necessarily MediaWiki (holy ugly, Batman, the syntax!). I dig MoinMoin. I just heard the talk by Vincent Massol about next-generation wikis. Okay, I can hear you moan under the load of buzzwords, but give me a moment. XWiki looks really promising.

Wikis basically allow you to publish mostly unstructured data. I say “mostly” because wikis give it some structure, but not (too) much: You can organize it in pages and put it into context (by linking between the pages). Often, this is enough. But recently, MediaWiki has started to add support for structured data. See that infobox in the top right corner of many Wikipedia articles? But that’s just half-hearted.

XWiki takes this one step further. XWiki, as I understand it, is a framework of loosely coupled components which allow you to create a wiki. The default wiki it ships with is pretty good, too, so most of the time, you won’t even have to touch the framework level. The cool part about XWiki is that you can define a class inside of it. Let me repeat: You can create a page (like a normal text page) that XWiki will treat as a class definition. So this class gets versioned, etc. You can then add attributes as you like.

After that, you can create instances of this class. The instances are again wiki pages. You can even use more than a single instance on a page, for example, you can have several tag instances and a single person instance. Instances are versioned, too. Of course they are, this is a wiki!

Now you need to display that data. You can use Velocity or Groovy for that. And guess what, the view is … a wiki page. So your designers can create a beautiful look for the boring raw data, with versions and comments and everything, while other people are busy adding data to the system.

In “normal” wiki pages, you can reference these instances and render them using such a template. The same is true for editors. With a few lines of code, you can create overview pages: all instances of a class, all instances with (or without) a certain property, or whatever else you can think of with a bit of Groovy.

Now imagine this: You have an existing database where your marketing guys can, say, plan the next campaign. They can use all the wiki features to collect ideas, filter and verify them, and come up with a really good plan. Some of that data needs to go into a corporate database. With earlier wikis, you’d have to use an external application, switch back and forth, and curse a lot when the two got out of sync.

With XWiki, you can finally annotate data in your corporate database with a wiki page, with all the power of a wiki, and you can even display the data set in the wiki and edit it there. Granted, the data set won’t be versioned unless your corporate database supports that, but it’s simple to do the versioning in the data access layer (for example, you can save all modifications in a log database).

Suddenly, possibilities open up.


Precompiling Custom JARs

24. June, 2009

Java 5 precompiles rt.jar into a cache file the first time you start it. This is mostly to improve startup times; it takes only a few moments and is much more efficient than running the JIT once a class has been used more than N times. The next time a class is requested, the VM skips the bytecode loading and directly pulls in the precompiled binary, which is already optimized for your CPU.

My idea is to open this process up for custom JARs. Any big Java app loads heaps of external JARs, and that takes time – often a lot of time. JARs would need to supply a special META-INF file containing a UUID or a checksum from which the VM can tell whether the JAR has changed.
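Just as a sketch of what that check could look like (the file name and format are invented, and SHA-256 stands in for whatever the VM would actually use):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Hypothetical: compute a checksum for a JAR and compare it with the
// one recorded in its META-INF file to see if the cached binary is stale.
public class JarChecksum {
    public static String checksumOf(File jar) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        InputStream in = new FileInputStream(jar);
        try {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) {
                md.update(buf, 0, n);
            }
        }
        finally {
            in.close();
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}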

The first time the JAR shows up in the classpath, the JIT precompiler would convert it and save the result in the cache. The next time, only the META-INF file would be read, and the bytecode would be ignored. I’ll open an enhancement request in OpenJDK 7. If you like the idea, please support it.


Jazoon 2009, Day One

23. June, 2009

It’s late, so only a very short summary of today.

James Gosling gave a broad overview of what is currently happening at Sun. Nice video but little meat. I asked about what happened to closures but I’m not sure whether I can repeat his answer here. My feeling is that, behind the scenes, there are a lot of emotions involved, and that’s bad. Oh well, maybe someone will simply implement something reasonable in OpenJDK. Otherwise, closures are probably dead in Java, which is a bit of a pity.

Dirk König showed the most common use cases for Groovy in a Java project. Nothing really new for anyone who has been in contact with Groovy before, but nicely packaged, and he showed some cool stuff you can do with this mature language.

After that, Neal Ford explained how design patterns have started to disappear. We didn’t really notice, but things like Iterators or Adapters have become features of the language itself. In Java, you still have to ask a container for an iterator; in Groovy, you just write container.each { … do something with each item … }. Really nice talk, as usual.
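To illustrate the point with plain Java (and Groovy’s each hides even the remaining loop):

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> container = Arrays.asList("a", "b", "c");

        // The classic Iterator pattern, spelled out by hand:
        for (Iterator<String> it = container.iterator(); it.hasNext(); ) {
            System.out.println(it.next());
        }

        // Java 5 folded the pattern into the language:
        for (String item : container) {
            System.out.println(item);
        }
    }
}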

Missed most of the next talk because I was talking to Dirk König, but if you’re using Maven or Ant as a build system, you should have a look at Gradle. It fixes most of the issues with Ant and some of those with Maven. Later that day, Hans Dockter (a Gradle developer) and I tossed a couple of ideas back and forth about how builds could be improved. If any of these get implemented, we’ll see a new way to build software.

At 15:00, Jason van Zyl told us what is happening in and around Maven. His talk was so full of information that it was impossible to follow both him and his slides. Maven 3.0 is due early 2010, and it will solve a lot of the issues in M2. One of the most important features: you get hooks to run stuff before and after a lifecycle phase. Ever wanted to calculate a build number? Now you can.

M3 is based on SAT4J, just like Eclipse p2. Now, if you follow my blog, you know that I hate p2. p2 is a piece of banana software: delivered green, ripens at the customer. Which is a pity. p2 solved a lot of the issues with the old installer, and it could solve all the other issues, but apparently there are forces behind the scenes which make this almost impossible. So when you meet Pascal Rapicault next time, don’t blame him for all the misery he has caused you. He has been handed a mission impossible, and those only succeed in movies.

Later that evening, Thomas Mueller talked about Testing Zen. Nothing really new but I’ll probably have a look at H2 next week or so. It could replace my favorite in-memory Java database HSQLDB.

The closing session was by Neal Ford again. I wish I could create slides that were only a fraction as great as his. *sigh* Anyway, he drew a large arc from how technologies can become obsolete within a few years (as we all know), over how good intentions pave the road to hell, to our responsibilities as software developers, which go beyond what’s in our contracts, and on to predicting the future. Well, Terminator is probably not a good example, but everyone knows it. Still, I find it troubling that the military is deploying thousands of automated drones for surveillance. You don’t? How would you feel about a robot equipped with a working machine gun that is programmed to automatically fire on any human who isn’t wearing an RFID tag? Samsung installed a couple of them along the northern border of South Korea two years ago. Skynet, here we come!

What they don’t teach you about software at school: Be Smart!, the last talk of the day, was disappointing. I’ll give Ivar that he had to compete against Neal but … The topic was okay and what he said was correct and all but the presentation could use some improvement. ‘Nuff said.


Printing Big Stuff On Linux

22. June, 2009

During the weekend, I tried to print my family tree. It was pretty simple to generate with GenealogyJ but frankly, the family tree view sucks. I was constantly shifting my “root” node to be able to see the parts I was interested in, and printing sucks even more: GJ uses lots of paper, printing huge empty boxes with lots of space around them. If I reduced the size of the parts I didn’t need, the layout fell apart. In short, I needed something better.

Graphviz to the rescue. A small Python program (100 lines) read the GED file and turned it into a graph with the persons laid out just the way I wanted them. dot thought about the graph for a few seconds before emitting a nice SVG which I could then print. Or so I thought.

I opened Inkscape and clicked on print. Yeah … that looks like the document … or rather a small part of it. Where is the poster print option? Ah, there is none. Great. Export as PNG with … oh … 150 DPI. Starting GIMP. No poster printing either. lpr tries to scale the image by .114537, which results in an 87 MB file. You gotta be kidding! kprinter? Nope … unless the command poster can be found.

A word of warning: the poster package for openSUSE 11.1 (which you get with zypper) is broken. I tried it with both PostScript and Encapsulated PostScript, and both resulting files were unusable in GhostView, GhostScript and Okular. Use this one instead.

After installing psutils, I could rotate the file to fit better on the page, too.

Conclusion: Most programs on Linux create PostScript files, but printing them is still something for the command line. The recent print dialogs all look somewhat similar (but are different in tiny, annoying ways), each one is missing some important detail, and most of them fail to simply print an oversized image on a single piece of paper or spread it over several. That Inkscape spits out an empty page after the document is just a minor issue.

What’s worse: the print preview either doesn’t work or doesn’t exist, and printing to a file is either not implemented or the setting isn’t persistent. In the whole process, I ruined roughly 50 sheets of paper. *sigh*

In the end, I had a process which involved six different programs (GenealogyJ, Python, graphviz, Inkscape, pstops, poster) just for this standard task. Printing on Linux still has some way to go. On the positive side, I could script all of these tasks – and I did get it done.


Testing The Impossible: Inserting Into Database

5. June, 2009

Tests run slowly when you need a database. An in-memory database like HSQLDB or Derby helps, but at a cost: your real database will accept some SQL which your test database won’t. So the question is: how can you write a performant test which uses the SQL of the real database?

My solution is to wrap the JDBC layer. Either use a mock JDBC interface like the one provided by mockrunner. Or write your own. With Java 5 and varargs, this is simple:

public int update (Connection conn, String sql, Object... params) throws SQLException {
    PreparedStatement stmt = null;
    try {
        stmt = conn.prepareStatement (sql);
        // JDBC parameter indexes are 1-based
        int i = 1;
        for (Object p: params) {
            stmt.setObject(i++, p);
        }
        return stmt.executeUpdate ();
    }
    finally {
        // prepareStatement() may have failed, leaving stmt null
        if (stmt != null) {
            stmt.close ();
        }
    }
}

Put all these methods into an object that you can pass around. In your tests, override this object with a mockup that simply collects the SQL strings and parameter arrays. You can even mix and match: By examining the SQL string, you can decide whether you want to run a query against the database or handle it internally.
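For example, a minimal recording mockup could look like this (assuming the update() method above lives in a class called Database):

import java.sql.Connection;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Test double: records every statement instead of touching a database.
public class RecordingDatabase extends Database {
    public final List<String> log = new ArrayList<String>();

    @Override
    public int update (Connection conn, String sql, Object... params) {
        log.add(sql + " " + Arrays.deepToString(params));
        return 1; // pretend exactly one row was affected
    }
}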

This way, you can collect any newly created objects but still load some background data from the database (until you get bored and make the query methods return predefined results).

In the asserts, just collect all the results into a big String and compare them all at once.

Note: The code above gets a bit more complicated if you allow null values. In that case, you need to tell JDBC what the column type is. My solution is a NullParameter class which contains the type. If the loop encounters this class, it calls setNull() instead of setObject().
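Here is a sketch of that marker class (the details are guessed, but the idea is as described above):

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

// Marker object standing in for a null value; it carries the SQL type
// that setNull() needs.
public class NullParameter {
    private final int sqlType;

    public NullParameter (int sqlType) {
        this.sqlType = sqlType;
    }

    // Called from the parameter loop instead of setObject().
    public void apply (PreparedStatement stmt, int index) throws SQLException {
        stmt.setNull(index, sqlType);
    }

    public static final NullParameter VARCHAR = new NullParameter(Types.VARCHAR);
    public static final NullParameter TIMESTAMP = new NullParameter(Types.TIMESTAMP);
}

The loop in update() then checks p instanceof NullParameter and calls apply() instead of setObject().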

