Eclipse Workspace Mechanic

14. March, 2011

If you want to:

  • Create a consistent environment among groups
  • Save time setting up new workspaces
  • Make sure your favorite preferences are applied to all your current and future workspaces

then this is for you.


Building Eclipse from Git

16. February, 2011

Andrew Niefer blogs about Building Eclipse from Git. Unfortunately, he doesn’t explain how to do that if you’re not a committer (i.e. don’t have a user account on eclipse.org).

I’m still hoping that one day, people outside the Eclipse team will be able to build Eclipse projects.


Getting MercurialEclipse 1.7.0

27. November, 2010

Wondering why Eclipse suddenly asks for a password for cbes.javaforge.com? Someone decided that it was a good idea to request users of MercurialEclipse to create accounts on JavaForge.

Not impressed? Go here instead.


Using Tycho to build Eclipse plugins

15. November, 2010

After my horrible time with PDE, I gave Tycho a whirl today. I must say the whole experience was much more pleasant (despite the unfriendly Tycho home page at tycho.sonatype.org – don’t go there!).

As before, I tried to build BIRT. Unfortunately, I failed (but much faster and I know why): Tycho 0.10.0 can’t resolve extra JAR dependencies: TYCHO-533 Tycho should honor jars.extra.classpath

If you want to get started with Tycho, visit this page. There is an example POM and lots of other bits and pieces.


Building patches for Eclipse

10. November, 2010

Frustrated by mysterious error messages from PDE? Overwhelmed by Buckminster?

If you need to apply a simple patch to an Eclipse plug-in, there is a simpler way. Follow this recipe:

  1. Download the source for the plug-in
  2. Create a new project
  3. Add all plug-ins in the eclipse/plugins folder to the build path, using the variable eclipse_home. This is simplest with an ANT build script. If you want to waste time, try to figure out which plugins you need and add only those.
  4. Extract the few Java source files that you need to modify and copy them into your project.
  5. Copy the JARs of plugins you want to patch into your project.
  6. Fix the bugs.
  7. Use a bit of ANT magic to replace the Java classes in the JARs you copied in step #5 with the fixed versions. The trick here is that most Eclipse JARs are signed. You’ll need to strip the signature information so the modified JARs can still be loaded. See below for a piece of code that does the trick.
  8. Add a rule to your build.xml to copy the fixed JARs back into the plugins folder of Eclipse.
  9. Exit Eclipse (or start another instance) to test your fixes.

Here is the source to strip the SHA1-Digest keys from a MANIFEST.MF file:

// Needs commons-io 1.4
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Manifest;

import org.apache.commons.io.IOUtils;

public class FilterManifest {

    public static void main( String[] args ) {
        try {
            FilterManifest tool = new FilterManifest();
            tool.run( args );
        } catch( Exception e ) {
            e.printStackTrace();
            System.exit( 1 );
        }
    }

    private void run( String[] args ) throws Exception {
        File manifestFile = new File( args[0] );
        Manifest manifest = readManifest( manifestFile );

        manifest.getEntries().clear();

        File backup = new File( manifestFile.getAbsolutePath() + ".bak" );
        if(! manifestFile.renameTo( backup ) ) {
            throw new RuntimeException( "Can't backup file" );
        }

        save( manifest, manifestFile );
    }

    private void save( Manifest manifest, File manifestFile ) throws IOException {
        FileOutputStream stream = new FileOutputStream( manifestFile );
        try {
            manifest.write( stream );
        } finally {
            IOUtils.closeQuietly( stream );
        }
    }

    private Manifest readManifest( File manifestFile ) throws IOException {
        FileInputStream stream = new FileInputStream( manifestFile );
        try {
            return new Manifest( stream );
        } finally {
            IOUtils.closeQuietly( stream );
        }
    }

}

To run this class as part of the build, use this ANT code:

    <target name="fix-org.eclipse.birt.engine" depends="init">
        <unjar src="plugins/org.eclipse.birt.report.engine_2.6.1.v20100915.jar" dest="tmp">
            <patternset>
                <include name="META-INF/MANIFEST.MF"/>
            </patternset>
        </unjar>
        <java classname="tools.FilterManifest">
            <arg file="tmp/META-INF/MANIFEST.MF"/>
            
            <classpath>
                <pathelement location="target-eclipse/classes" />
                <pathelement location="target/classes" />
                <pathelement location="${m2_repo}/commons-io/commons-io/1.4/commons-io-1.4.jar" />
            </classpath>
        </java>
        <jar destfile="tmp/org.eclipse.birt.report.engine_2.6.1.v20100915.jar"
            compress="true" update="true" duplicate="preserve" index="true"
            manifest="tmp/META-INF/MANIFEST.MF"
        >
            <fileset dir="target-eclipse/classes">
                <include name="org/eclipse/birt/report/engine/**/*" />
            </fileset>
            <zipfileset src="plugins/org.eclipse.birt.report.engine_2.6.1.v20100915.jar">
                <exclude name="META-INF/*"/>
            </zipfileset>
        </jar>
    </target>

The two <pathelement> elements are necessary to make the code work from Eclipse and command line Maven (I’m using different target directories for Eclipse and Maven).

The complex <jar> target copies everything from the existing plugin JAR except the crypto info.


Using Eclipse to parse Java code

5. November, 2010

Eclipse comes with its own Java compiler. You can use this compiler to generate an AST from Java code by adding plugins/org.eclipse.jdt.core_<version>.jar to the classpath (details):

import java.io.*;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import org.apache.log4j.Logger;
import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.CompilationUnit;

public class EclipseAstParser {

    public static final String VERSION_1_4 = "1.4";
    public static final String VERSION_1_5 = "1.5";
    public static final String VERSION_1_6 = "1.6";

    private static final Set<String> ALLOWED_TARGET_JDKS = new LinkedHashSet<String>();
    static {
        ALLOWED_TARGET_JDKS.add(VERSION_1_4);
        ALLOWED_TARGET_JDKS.add(VERSION_1_5);
        ALLOWED_TARGET_JDKS.add(VERSION_1_6);
    }

    private static final Logger log = Logger.getLogger(EclipseAstParser.class);
    public static boolean DEBUG;

    private String targetJdk = VERSION_1_4;
    private String encoding = "UTF-8";

    public void setTargetJdk( String targetJdk ) {
        if(!ALLOWED_TARGET_JDKS.contains(targetJdk))
            throw new IllegalArgumentException("Invalid value for targetJdk: [" + targetJdk + "]. Allowed are "+ALLOWED_TARGET_JDKS);

        this.targetJdk = targetJdk;
    }

    public void setEncoding( String encoding ) {
        if( encoding == null )
            throw new IllegalArgumentException("encoding is null");
        if( encoding.trim().length() == 0 )
            throw new IllegalArgumentException("encoding is empty");
        this.encoding = encoding;
    }

    public AstVisitor visitFile( File file ) throws IOException {
        if(!file.exists())
            throw new IllegalArgumentException("File "+file.getAbsolutePath()+" doesn't exist");

        String source = readFileToString( file, encoding );

        return visitString( source );
    }

    public static String readFileToString( File file, String encoding ) throws IOException {
        FileInputStream stream = new FileInputStream( file );
        String result = null;
        try {
            result = readInputStreamToString( stream, encoding );
        } finally {
            try {
                stream.close();
            } catch (IOException e) {
                // ignore
            }
        }
        return result;
    }

    public AstVisitor visit( InputStream stream, String encoding ) throws IOException {
        if( stream == null )
            throw new IllegalArgumentException("stream is null");
        if( encoding == null )
            throw new IllegalArgumentException("encoding is null");
        if( encoding.trim().length() == 0 )
            throw new IllegalArgumentException("encoding is empty");

        String source = readInputStreamToString( stream, encoding );

        return visitString( source );
    }

    public static String readInputStreamToString( InputStream stream, String encoding ) throws IOException {

        Reader r = new BufferedReader( new InputStreamReader( stream, encoding ), 16384 );
        StringBuilder result = new StringBuilder(16384);
        char[] buffer = new char[16384];

        int len;
        while((len = r.read( buffer, 0, buffer.length )) >= 0) {
            result.append(buffer, 0, len);
        }

        return result.toString();
    }

    public AstVisitor visitString( String source ) {
        ASTParser parser = ASTParser.newParser(AST.JLS3);

        @SuppressWarnings( "unchecked" )
        Map<String,String> options = JavaCore.getOptions();
        if(VERSION_1_5.equals(targetJdk))
            JavaCore.setComplianceOptions(JavaCore.VERSION_1_5, options);
        else if(VERSION_1_6.equals(targetJdk))
            JavaCore.setComplianceOptions(JavaCore.VERSION_1_6, options);
        else {
            if(!VERSION_1_4.equals(targetJdk)) {
                log.warn("Unknown targetJdk ["+targetJdk+"]. Using "+VERSION_1_4+" for parsing. Supported values are: "
                        + VERSION_1_4 + ", "
                        + VERSION_1_5 + ", "
                        + VERSION_1_6
                );
            }
            JavaCore.setComplianceOptions(JavaCore.VERSION_1_4, options);
        }
        parser.setCompilerOptions(options);

        parser.setResolveBindings(false);
        parser.setStatementsRecovery(false);
        parser.setBindingsRecovery(false);
        parser.setSource(source.toCharArray());
        parser.setIgnoreMethodBodies(false);

        CompilationUnit ast = (CompilationUnit) parser.createAST(null);

        // AstVisitor extends org.eclipse.jdt.core.dom.ASTVisitor
        AstVisitor visitor = new AstVisitor();
        visitor.DEBUG = DEBUG;
        ast.accept( visitor );

        return visitor;
    }
}

Eclipse Modeling Day

29. October, 2010
(Image via Wikipedia: “Data Modelling Today”)

Yesterday was Eclipse Modeling Day here in Zürich. There were a couple of talks from people who were using modeling for projects and talks from project leaders of modeling projects like EMF and CDO.

Eclipse Modeling Platform for Enterprise Modeling

If you’ve used the Eclipse modeling projects, you’ll know the pain: Where to start? Which project is worth spending time on? Caveats? Things like that. It seems that’s not a superficial problem. Eclipse Modeling is a big, unsolved jigsaw puzzle. The new project “Eclipse Modeling Platform” sets out to close the major gaps in the next two years. On the road map are things like authentication, large-scale models, comparing models, etc.

For me, the list of topics looked more like an MBA’s wish-list than something that will make life easier for software developers. Their standpoint was that the funders call the shots. My standpoint is that we need tools to help us solve the basic issues, like good editors for (meta-)models and a useful debugging framework for code generators.

Interesting projects: Sphinx and Papyrus.

User Story: Models as First Class Citizens in the Enterprise

Since many people didn’t seem to be aware of what modeling can do, Robert Blust (UBS AG) showed an example. Like most banks, UBS has tons of legacy code. And tons of rules. Rules like: Any application A must access data of another application B via a well-defined interface. Their product would collect a couple of gigabytes of data from old COBOL code and use that to determine dependencies (like the DB tables it uses).

The next step would be to define which tables belong to which application, and the end result is an application which can show and track rule violations. Or which can show a Java developer which tables he must care about if he has to replace an old COBOL application.

There was the question of authentication: Who can see what of the model? This is going to be some work to solve in a way that it’s still manageable. For example, a part of the model could be accessible via a roles-based model. A software developer should be able to see all the data which is relevant to his project. But what about bug reports? Should a reporter be allowed to see all of them? What about the security related ones?

If we go to fraud tracking, individual instances in the model might be visible to just a very few people. So authentication is something which needs to scale extremely well. It must be as coarse or fine-grained as needed, sometimes the whole range in a single model.

Eclipse Modeling Framework for Data Modeling

Ed Merks introduced EMF. Not much new here for me. I tried to talk to him during the coffee break but he was occupied by Benjamin Ginsberg. Benjamin was interested in getting a first rough overview of modeling. Apparently, I made some impression on him, because he came back later to see me.

Textual Modeling with Xtext

Sven Efftinge showed some magic using Xtext: He had his meta-model open in two editors, a textual and a graphical one. When he changed something in the graphical view, it would show up in the text editor after save. Nice. I couldn’t ask him how much code it took to implement this.

Under the hood, Xtext uses Guice for dependency injection.

Graphical Modeling with Graphiti

Michael Wenz from SAP showcased Graphiti. It’s a graphical editor framework for models like GMF, but I guess there is a reason why SAP reinvented the wheel. Several people at the event mentioned GMF unfavorably. I’m not sure why that is, but I remember that EMF generated huge, non-reusable blobs of Java code when I asked it to generate an editor for my models. Ed wasn’t exactly excited when I asked to change that.

Graphiti itself looks really promising. The current 0.8 is pretty stable and has a graphical editor for JPA models which allows you to define relations between entities via drag-and-drop. No more wondering which side is the “opposite.” It also creates all the fields, gives them the right types, etc. From the back of the room, it looked like a great time-saver.

User Story: The Usage of Models in an Embedded Automotive IDE

A guy from Bosch showed some real-life problems with modeling, especially with performance. They have huge models. Since they didn’t look at CDO, their editors had to load the whole model into RAM. Since Java can only allocate about 1.5 GB of RAM on 32-bit hardware, they are at the limit of what they can handle (some projects have 400 MB of sources).

It’s a good example of how an existing technology could have made their lives easier if they had only known about it. Or maybe EMF is too simple a solution (as in “A scientific theory should be as simple as possible, but no simpler.” — Albert Einstein).

Modeling Repository with CDO

Eike Stepper was glad, though. It gave him a perfect opportunity to present CDO which solves exactly this problem. CDO connects a client to a repository server. Any change to the model in the client is sent to the server, applied and then confirmed for all connected clients. So things like scalability, remote access and multi-client support come for free.

Over the years, CDO has collected a large number of connection modes, like replication and an off-line mode. They even solve problems like processing lists with millions of elements. Promising.

One problem Eike mentioned is the default EMF editors. Not reusable, not exactly user-friendly. Since that hasn’t changed in the last four years, it’s probably something the modeling community doesn’t deem “important.” For some people, XML is apparently good enough.

Project Dawn is trying to improve the situation.

User Story: Successful Use of MDSD in the Energy Industry

RWE (one of the largest European energy companies) showed how they used model driven software development (MDSD) to create software to automatically handle all the use cases of their energy network. He stressed the fact that without strict rules and their application, MDSD will fail just like any other methodology. Do I hear moaning out of the agile corner? 😉

Anyway. My impression was that these guys don’t come up with stup…great new ideas every five minutes and expect them to be implemented already. Delivering electricity isn’t something that you entrust to just anybody. These people are careful to start with. So there are in fact industries where strict rules work. Anyway, MDSD is another arrow in the quiver. Use it wisely.

User Story: Nord/LB – Modeling of Banking Applications with Xtext and GMF

The last speaker was from Nord/LB, a German bank. He dropped a couple of remarks about GMF. Seems like he hit some of the gaps mentioned earlier.

Their solution included several DSLs which allowed them to describe the model, the UIs, the page flow in the web browser, etc. Having seen Enthought Traits, I’m wondering which approach is better: Keep everything in a separate model (well, Xtext can track cross-references between models just like the Java editor can) or put all the information in a single place.

If you keep everything in a single place (i.e. every part of the model also knows what to tell the UI framework when it wants to generate the editors), that makes the description of the model quite big and confusing. The information you want to see is drowned in a dozen lines. If you keep the information separate, you must store that in your memory when you switch editors.

I guess the solution is to create an editor which can display that part of the information which you need right now.

The Reception

After the talks, I had a long talk with Eike Stepper and Ed Merks. One of my main issues is that models are pretty static. You can’t add properties and methods to them at runtime. At least not to Ecore-based models. Or maybe you could, but you shouldn’t. Which seems odd to me. We have plug-in based architectures like Eclipse. We have XML, which stands for Extensible Markup Language. Why does modeling have to start in the stone age again, without support for model life-cycle, migration, evolution?

When I presented my use case to Eike, he said “never heard that before.” So either the modeling community is going for the low-hanging fruit or my use cases are exceptional. All I’m asking for is a model which I can attribute with additional information at runtime. Oh, yes, I could use EMF annotations for this. Which EMF default editor supports that? Hm. So what if my users want to extend the EClass “Person” with a middle name? Something that HyperCard could do, hands down, in 1987?



Jazoon 2010 Day 1

2. June, 2010

So, this is the great wrap-up of Jazoon 2010, day 1. What did I have?

The keynote by Danny Coward

Java SE and JavaFX: The Road Ahead. After the acquisition by Oracle, everyone was curious as to what happens to Java. Unfortunately, the slides aren’t online yet, but from my faint memory, we might get closures after all, and with a sane syntax, too. Plus all the stuff mentioned on Sun’s JDK 7 page. ATM, this stuff is a bit in flux and it’s hard to get a definitive list, but something is moving at least.

From my point of view, closures and all the other language features come too late for the Java language (important companies won’t upgrade to Java 7 any time soon, some of them even cling to 1.4!) but their implementation in the main language of the Java VM will allow better and faster non-Java languages to be built on top of the VM. Now if the VM included a compiler API to build JNI code for native libraries on the fly, we would have a worthy challenger for .NET. Yeah … I know. A man can have dreams, okay?

And there was some talk about JavaFX. It seems that the technology will reach its beta phase soon (see my notes for the second day). He showed one demo: Geo View of Vancouver 2010. It’s a world map showing which country won how many medals, and when you open one of the blobs, you get the names of the athletes in a fan-out widget. You can click on a name to get more information (like the photo) or you can compare the results against countries with the same number of athletes, or population, or closest GDP, or just geographically closest. It gives a nice example of how to visualize a lot of data and wade through it intuitively.

Client / Server 2.0 with Java and Flex by James Ward

James showed how you can use Flash and a Java server to build really nice web apps. He showed several examples: A few lines of code to build a UI which runs on an Android mobile phone, in the web browser and on the desktop. All with really nice performance. One was the insurance company demo. Just enter some arbitrary data until you come to the damage details and incident report. They show new ways to enter information which make the tool usable to anyone who can recognize a car and a top-view of a street.

If you like what you see, you should probably take the Tour de Flex. It shows off a whole lot of stuff. Also try the Tour de Flex Dashboard. It shows you in real time who looks at what part of the TdF right now.

Blueprint – Modern Dependency Injection for OSGi by Costin Leau

Another DI system, this time tied to OSGi. Nothing really exciting here. The talk was okay but the speaker soon lost my interest.

One thing to note: Eclipse 4 comes with a different DI system. I wonder if they will drop that in favor of the new OSGi standard in 4.1.

Patterns and Best Practices for building large GWT applications by Heiko Braun

I went to see this but quickly realized that I’ve heard the talk before at the JUGS. Here is the link to the slides. As a result of his experience he started project errai which collects best practices to build large GWT applications.

Objects of Value by Kevlin Henney

One of the main weak points in software development is that we don’t know what we’re talking about. When my project manager comes to me and asks “When are you done?” my answer is “Soon” … Right 😉 Or think about strings. Everyone else on the planet calls it “text”.

Obviously, Kevlin had a lot of fun on stage, and so had we. In essence, “Objects of Value” or “Value Objects” are even simpler than POJOs (think Integer class). The main reason to use them is to make your code more expressive and readable. Instead of

public User (String name, String firstName, int age, String zipCode, String city)

you (can) create a couple of value objects:

public User (Name name, FirstName firstName, Age age, ZipCode zipCode, City city)

This may sound ridiculous (and it is in this example) but in a lot of places, using String is just a form of bad laziness (the kind of laziness which leads to maintenance problems later). One of the advantages of the approach above is that you notice when you mix up last and first name, because the compiler will tell you. The major disadvantage is that it leads to a class explosion. Not to an instance explosion, though, since we just replace a String with a value object that tells us what we have.

In addition to that, Java isn’t really meant for these kinds of objects. There is a lot of boilerplate code to define value objects and to use them. But if you have a system that is sufficiently complex and you use a value with a unit in many places (think of a currency value), you should really consider replacing the String+BigDecimal combination with a value object.
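To make the boilerplate concrete, here is a minimal sketch of such a value object. The class name ZipCode and the five-digit validation rule are my own illustration, not something from the talk:

```java
// A minimal value object: immutable, validated on construction,
// with value-based equals/hashCode. The 5-digit rule is just an
// illustrative assumption, not a universal zip code format.
public final class ZipCode {
    private final String value;

    public ZipCode(String value) {
        if (value == null || !value.matches("\\d{5}")) {
            throw new IllegalArgumentException("Not a zip code: [" + value + "]");
        }
        this.value = value;
    }

    public String value() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ZipCode && value.equals(((ZipCode) o).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }

    @Override
    public String toString() {
        return value;
    }
}
```

Once you have this, a constructor like the one above can no longer be called with the city in the zip code slot, and invalid values are rejected at the system boundary instead of deep inside the business logic.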

Many important points of his talk can be found in the paper Objects of Value on his homepage.

This concludes the first part of my Jazoon 2010 report. Go on with part 2.


Using Mercurial with Dropbox

17. April, 2010

If you want to take a Mercurial repository with you, you have several options:

  1. Create a server somewhere. Don’t forget to install all the security patches.
  2. Use a USB stick. Don’t forget it somewhere (like at home) and don’t forget to always push your changes onto it.
  3. Use Dropbox

Dropbox is a file server in the cloud. While they swear your data is safe (“All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password.” – see the features), it’s better to be safe than sorry. Also, Dropbox can’t really cope with the fast changes to the virtual filesystem done by Mercurial (this will lead to corrupt repositories and missing changesets).

The solution is to create a TrueCrypt container in your Dropbox. Dropbox won’t be able to see any changes as long as the container is mounted. When you dismount the container, Dropbox will check the file for changes (if you write to the container, TrueCrypt just modifies a few sectors). So even if you create a 100MB container, only the initial sync will be slow.

There are a few obstacles, though:

  1. You must remember to mount the container, and push your changes into it.
  2. If you forget to dismount and push changes into the container on a different computer, you’ll see two containers. In this case, mount the second container somewhere, merge the changes using Mercurial and then commit to the original container.
  3. You must install TrueCrypt and Dropbox on all computers where you want to use this.
  4. The cycle “mount-push-dismount” becomes tedious over time.
  5. If you use HgEclipse, the plug-in will forget the local paths if you forget to mount the container before you start Eclipse.

The OSS dilemma

7. April, 2010

Disclaimer: IANAL

In his post about EPL, GPL and Eclipse plugins (“EPL/GPL Commentary“), Mike Milinkovich says:

What is clear, however, is that it is not possible to link a GPL-licensed plug-in to an EPL-licensed code and distribute the result. Any GPL-licensed plug-in would have to be distributed independently and combined with the Eclipse platform by an end user.

Which is probably true because of the incompatible goals of the two licenses: The EPL was designed by companies, which make a lot of money with software, to protect the investments in the source code they contribute to an OSS project. Notice “a lot of money.”

The GPL was designed to make sure companies can’t steal from poor OSS developers and sell a product as their own, or take some source code, add a few lines of code and then sell it as their own, etc. The GPL, unlike the EPL, is made as a sword to keep away people who don’t want to share their work under the GPL.

As such, both licenses work as designed, and they are incompatible because their goals are incompatible. We as OSS developers can whine and complain that there is no legal way to build an Eclipse plugin for Subversion without first creating a Subversion client which is EPL-licensed, but that doesn’t change the fact that it is illegal. It’s the price we pay for the freedom we have. If the licenses were different, there would be legal loopholes.

Yes, it sucks.