Eclipse Modeling Day

Yesterday was Eclipse Modeling Day here in Zürich. There were talks from people who use modeling in real projects, and talks from the leads of modeling projects like EMF and CDO.

Eclipse Modeling Platform for Enterprise Modeling

If you’ve used the Eclipse modeling projects, you’ll know the pain: Where to start? Which project is worth spending time on? What are the caveats? Things like that. It seems that’s not a superficial problem. Eclipse Modeling is a big, unsolved jigsaw puzzle. The new project “Eclipse Modeling Platform” sets out to close the major gaps within the next two years. On the roadmap are things like authentication, large-scale models, model comparison, etc.

For me, the list of topics looked more like an MBA’s wish list than something that will make life easier for software developers. Their standpoint was that the funders call the shots. My standpoint is that we need tools that help us solve the basic issues first, like good editors for (meta-)models and a useful debugging framework for code generators.

Interesting projects: Sphinx and Papyrus.

User Story: Models as First Class Citizens in the Enterprise

Since many people didn’t seem to be aware of what modeling can do, Robert Blust (UBS AG) showed an example. Like most banks, UBS has tons of legacy code. And tons of rules. Rules like: any application A must access the data of another application B via a well-defined interface. Their product collects a couple of gigabytes of data from old COBOL code and uses that to determine dependencies (like the DB tables an application touches).

The next step is to define which tables belong to which application, and the end result is an application which can show you rule violations and track them. Or which can show a Java developer which tables he has to take care of when he has to replace an old COBOL application.
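To make the rule concrete: once the dependency data is in a model, the check itself is almost trivial. Here is a minimal sketch of the idea; all the names (TableAccess, findViolations, the owner map) are mine, not UBS’s:

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class AccessRuleCheck {

        /** One observed data access, e.g. extracted from old COBOL sources. */
        static class TableAccess {
            final String application;   // the accessing application
            final String table;         // the DB table being read or written
            final boolean viaInterface; // true if the access goes through the owner's interface

            TableAccess(String application, String table, boolean viaInterface) {
                this.application = application;
                this.table = table;
                this.viaInterface = viaInterface;
            }
        }

        /** Rule: tables of another application may only be touched via its interface. */
        static List<TableAccess> findViolations(List<TableAccess> accesses,
                                                Map<String, String> tableOwner) {
            return accesses.stream()
                .filter(a -> !a.application.equals(tableOwner.get(a.table)))
                .filter(a -> !a.viaInterface)
                .collect(Collectors.toList());
        }
    }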

There was the question of authorization: who can see which parts of the model? Solving this in a way that stays manageable is going to be some work. For example, part of the model could be accessible via a role-based scheme. A software developer should be able to see all the data that is relevant to his project. But what about bug reports? Should a reporter be allowed to see all of them? What about the security-related ones?

If we move on to fraud tracking, individual instances in the model might be visible to just a very few people. So authorization is something which needs to scale extremely well. It must be as coarse- or fine-grained as needed, sometimes across the whole range in a single model.
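To illustrate what “as coarse- or fine-grained as needed” could mean at the API level, here is a tiny sketch; the interface and its names are my own invention, not something any Eclipse project offers today:

    import java.security.Principal;

    import org.eclipse.emf.ecore.EObject;

    /**
     * Hypothetical access policy: the same question can be answered per repository,
     * per package, per class or per individual model element (EObject),
     * depending on how much granularity a deployment needs.
     */
    public interface ModelAccessPolicy {
        boolean canRead(Principal user, EObject element);
        boolean canWrite(Principal user, EObject element);
    }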

Eclipse Modeling Framework for Data Modeling

Ed Merks introduced EMF. Not much new here for me. I tried to talk to him during the coffee break, but he was occupied with Benjamin Ginsberg. Benjamin was interested in getting a first rough overview of modeling. Apparently I made some impression on him, because he came back later to see me.

Textual Modeling with Xtext

Sven Efftinge showed some magic using Xtext: he had his meta-model open in two editors, a textual and a graphical one. When he changed something in the graphical view, it would show up in the text editor after saving. Nice. I couldn’t ask him how much code it took to implement this.

Under the hood, Xtext uses Guice for dependency injection.
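For those who haven’t seen Guice: you declare your bindings in a module and let the injector wire up the object graph; Xtext languages ship with such a module and let you override individual bindings. A minimal, Xtext-independent sketch (the service names are made up):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    public class GuiceDemo {

        interface NameProvider { String nameFor(Object element); }

        static class SimpleNameProvider implements NameProvider {
            public String nameFor(Object element) { return element.toString(); }
        }

        static class Labeler {
            private final NameProvider names;

            @Inject
            Labeler(NameProvider names) { this.names = names; }

            String label(Object element) { return "<<" + names.nameFor(element) + ">>"; }
        }

        public static void main(String[] args) {
            // The module is the single place where implementations are chosen;
            // overriding a binding swaps the behavior everywhere it is injected.
            Injector injector = Guice.createInjector(new AbstractModule() {
                @Override protected void configure() {
                    bind(NameProvider.class).to(SimpleNameProvider.class);
                }
            });
            System.out.println(injector.getInstance(Labeler.class).label("EClass Person"));
        }
    }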

Graphical Modeling with Graphiti

Michael Wenz from SAP showcased Graphiti. It’s a graphical editor framework for models, like GMF, but I guess there is a reason why SAP reinvented the wheel. Several people at the event mentioned GMF unfavorably. I’m not sure why that is, but I remember that EMF generated huge, non-reusable blobs of Java code when I asked it to generate an editor for my models. Ed wasn’t exactly excited when I asked about changing that.

Graphiti itself looks really promising. The current 0.8 release is pretty stable and comes with a graphical editor for JPA models which lets you define relations between entities via drag-and-drop. No more wondering which side is the “opposite.” It also creates all the fields, gives them the right types, etc. From the back of the room, it looked like a great time-saver.
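The “opposite” question is the usual bidirectional-mapping puzzle in JPA, where one side owns the foreign key and the other side points back via mappedBy. As a plain JPA reminder of what such an editor has to generate (the entity names are just an example, not Graphiti output):

    import java.util.List;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    @Entity
    class Department {
        @Id @GeneratedValue
        Long id;

        // inverse side: mappedBy names the field on the owning side
        @OneToMany(mappedBy = "department")
        List<Employee> employees;
    }

    @Entity
    class Employee {
        @Id @GeneratedValue
        Long id;

        // owning side: this is where the join column lives
        @ManyToOne
        Department department;
    }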

User Story: The Usage of Models in an Embedded Automotive IDE

A guy from Bosch showed some real-life problems with modeling, especially with performance. They have huge models. Since they didn’t look at CDO, their editors had to load the whole model into RAM. And since Java can only allocate about 1.5 GB of RAM on 32-bit hardware, they are at the limit of what they can handle (some projects have 400 MB of sources).

It’s a good example of how an existing technology could have made their lives easier if only they had known about it. Or maybe EMF is too simple a solution (as in “A scientific theory should be as simple as possible, but no simpler.” — Albert Einstein).

Modeling Repository with CDO

Eike Stepper was glad, though. It gave him a perfect opportunity to present CDO, which solves exactly this problem. CDO connects a client to a repository server. Any change to the model in the client is sent to the server, applied, and then confirmed to all connected clients. So things like scalability, remote access and multi-client support come for free.
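The programming model behind this is pleasantly small: once you have a CDOSession, you work with transactions much like with ordinary EMF resources. A rough sketch from memory (session setup omitted, exact exception types vary between CDO versions, and the resource path is made up):

    import org.eclipse.emf.cdo.eresource.CDOResource;
    import org.eclipse.emf.cdo.session.CDOSession;
    import org.eclipse.emf.cdo.transaction.CDOTransaction;
    import org.eclipse.emf.ecore.EObject;

    public class CdoSketch {

        static void store(CDOSession session, EObject modelElement) throws Exception {
            CDOTransaction transaction = session.openTransaction();
            try {
                // resources live in the repository, not in the local workspace
                CDOResource resource = transaction.getOrCreateResource("/demo/people");
                resource.getContents().add(modelElement);

                // the commit goes to the server and is then visible to the other clients
                transaction.commit();
            } finally {
                transaction.close();
            }
        }
    }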

Over the years, CDO has collected a large number of connection modes, like replication and an offline mode. They even solve problems like processing lists with millions of elements. Promising.

One problem Eike mentioned is the default EMF editors: not reusable, not exactly user-friendly. Since that hasn’t changed in the last four years, it’s probably something the modeling community doesn’t deem “important.” For some people, XML is apparently good enough.

Project Dawn is trying to improve the situation.

User Story: Successful Use of MDSD in the Energy Industry

RWE (one of the largest European energy companies) showed how they used model-driven software development (MDSD) to create software that automatically handles all the use cases of their energy network. Their speaker stressed that without strict rules, consistently applied, MDSD will fail just like any other methodology. Do I hear moaning from the agile corner? 😉

Anyway. My impression was that these guys don’t come up with stup…great new ideas every five minutes and expect them to be implemented already. Delivering electricity isn’t something you entrust to just anybody. These people are careful to start with. So I do see that there are in fact industries where strict rules work. Anyway, MDSD is another arrow in the quiver. Use it wisely.

User Story: Nord/LB – Modeling of Banking Applications with Xtext and GMF

The last speaker was from Nord/LB, a German bank. He dropped a couple of remarks about GMF. Seems like he hit some of the gaps mentioned earlier.

Their solution included several DSLs which allowed them to describe the model, the UIs, the page flow in the web browser, etc. Having seen Enthought Traits, I wonder which approach is better: keep each aspect in a separate model (well, Xtext can track cross-references between models just like the Java editor can) or put all the information in a single place.

If you keep everything in a single place (i.e. every part of the model also knows what to tell the UI framework when it generates the editors), the description of the model gets quite big and confusing: the information you want to see drowns in a dozen other lines. If you keep the information separate, you have to keep it in your head when you switch editors.
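In Java terms, the trade-off looks roughly like this (the @UiHint annotation and the descriptor class are invented purely for illustration): either the model carries its UI metadata inline, or a second artifact carries it and you have to remember the link.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Invented annotation: UI information attached directly to the model.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface UiHint {
        String label();
        String widget();
        int maxLength() default -1;
    }

    // Variant 1: everything in one place - handy for the generator, noisy for the reader.
    class CustomerInline {
        @UiHint(label = "Family name", widget = "text", maxLength = 60)
        String familyName;
    }

    // Variant 2: the model stays clean, the UI information lives in a separate descriptor
    // (and in your head when you switch editors).
    class CustomerPlain {
        String familyName;
    }

    class CustomerUiDescriptor {
        // maps "familyName" -> label, widget, validation rules, ...
    }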

I guess the solution is an editor which can display just the part of the information you need right now.

The Reception

After the talks, I had a long talk with Eike Stepper and Ed Merks. One of my main issues is that models are pretty static. You can’t add properties and methods to them at runtime. At least not to Ecore-based models. Or maybe you could, but you shouldn’t. Which seems odd to me. We have plug-in based architectures like Eclipse. We have XML, which stands for Extensible Markup Language. Why does modeling have to start in the stone age again, without support for model life-cycle, migration and evolution?

When I presented my use case to Eike, he said, “never heard that before.” So either the modeling community is going for the low-hanging fruit, or my use cases are exceptional. All I’m asking for is a model which I can extend with additional information at runtime. Oh yes, I could use EMF annotations for this. But which default EMF editor supports that? Hm. So what if my users want to extend the EClass “Person” with a middle name? Something that HyperCard could do, hands down, in 1987?
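For the record, dynamic EMF does let you build or extend an EClass programmatically at run time; the real gap is in the tooling and editors around it. A minimal sketch, reusing the “Person”/middle-name example from above:

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;
    import org.eclipse.emf.ecore.util.EcoreUtil;

    public class DynamicEmfSketch {

        public static void main(String[] args) {
            EcoreFactory factory = EcoreFactory.eINSTANCE;

            // Build (or look up) the metamodel at run time.
            EPackage pkg = factory.createEPackage();
            pkg.setName("people");
            pkg.setNsPrefix("people");
            pkg.setNsURI("http://example.org/people");

            EClass person = factory.createEClass();
            person.setName("Person");
            pkg.getEClassifiers().add(person);

            // The run-time extension: add "middleName" without regenerating any code.
            EAttribute middleName = factory.createEAttribute();
            middleName.setName("middleName");
            middleName.setEType(EcorePackage.Literals.ESTRING);
            person.getEStructuralFeatures().add(middleName);

            // Instances are generic EObjects, accessed reflectively.
            EObject johnSmith = EcoreUtil.create(person);
            johnSmith.eSet(middleName, "Quincy");
            System.out.println(johnSmith.eGet(middleName));
        }
    }

The catch is exactly the one described above: the stock editors and whatever persists the instances have to cope with a feature that didn’t exist when they were generated.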

 

4 Responses to Eclipse Modeling Day

  1. Hendy Irawan says:

    Regarding dynamically altering metamodels at runtime, I haven’t tried it yet but I have a bunch of use cases for that.

    I’d like the “original” metamodels to be “mostly” fixed, at least the source .ecore files.

    I’d like to be able to have a plugin that, when installed, dynamically adds “MiddleName” to the “Person” class.

    Another requirement is that I want it to be somewhat RDBMS-friendly, in that enabling/disabling dynamic attributes does not cause RDBMS schema changes (although the first time the plugin is installed, it needs to create some tables for storage).

    I’m thinking one way to do this would be to create another “fragment” metamodel that contains the “IMiddleName” class/interface.

    At runtime, these objects (“IMiddleName fragments”) will live as separate EMF EObjects, and as EObjects these fragments can be edited using their respective EMF editors.

    However, to find out whether a “John Smith” Person instance has a middle name (fragment), I have to call a proprietary API (with all the disadvantages of not being integrated with any EMF editor). Probably something like this:

    class Person {
        public EObject getFragment(EClass fragmentClass);
    }

    The getFragment implementation can be provided during model generation; it can simply delegate to a global implementation.

    Is this just the same (but dumber?) approach as using EMF annotations?

    • digulla says:

      That all sounds pretty reasonable. I’m also leaning towards a “keep some stuff static and slap a dynamic layer onto it” approach. Ed isn’t 100% against it; it’s just that there never was a project that really needed it.

      Which sounds odd; are all modeling projects out there so static that no customization of the model at runtime is necessary? Has everyone just accepted that this doesn’t work or do they simply regenerate their models when something changes?

      • Hendy Irawan says:

        “are all modeling projects out there so static”

        I would guess that is the case.

        It’s possible that the Java classes being used aren’t so “static”; e.g. with AspectJ load-time weaving / Equinox Aspects it’s practical to enhance classes at run-time. However, from EMF’s point of view, the metamodel is unchanged.

        The only domain where I think such runtime extensibility is needed for EMF is for ERP-type applications.

        Even so, it’s possible (maybe easier than “slapping a dynamic layer” on) to embed the EMF toolkits themselves so that the end user can edit the .ecore/.ecorediag files (to add the middleName : EString attribute); the regeneration of the Java classes etc. then becomes “part of the runtime”.

        In a way, making the end-user part of the development feedback loop. 🙂

  2. digulla says:

    That feels like such an obvious thing, why isn’t it supported?
