TNBT – Creating Tests from the Debugger

20. July, 2015

From my series “The Next Best Thing“:

Often, you will find yourself in a debugger, trying to follow some insanely complicated code to find the root cause of a bug.

I would like to see a button in my IDE which reads “New Test Case”. It should examine the current program state, determine (with my help) what part of the code I want to test, and then copy the current state into a unit test. In a second step, I could trim down the test, but I would start with all the input and all necessary dependencies in place, correctly initialized.

It would also be great if the IDE would track the state of the code from which the unit test was generated. If the code changes too much (indicating that the unit test might become outdated), I’d like to see that. Or maybe the IDE could figure out by itself when code tested in such a way deviates “too much.”

Along the same lines, the IDE should be able to inject probes into the production code. As I click buttons and enter data in the UI, the IDE should generate a series of unit tests under the hood, as described here. If you’re using frameworks like Spring, the tests should come with minimal (or mocked) dependencies.
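
To make this a bit more concrete, here is a rough sketch of what such a generated test might look like. All names are invented, the method under test is a trivial stand-in, and a real generated test would of course also set up collaborators and mocked Spring beans:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

// Hypothetical output of the "New Test Case" button: the debugger captures the
// inputs of the method I am currently stepping through and writes them out as
// plain setup code.
public class GeneratedFromDebuggerTest {

    // Stand-in for the production method under test.
    static double totalAbove(List<Double> values, double threshold) {
        return values.stream()
                .filter(v -> v >= threshold)
                .mapToDouble(Double::doubleValue)
                .sum();
    }

    @Test
    public void reproducesStateFromDebugSession() {
        // Captured input: the exact list that was on the stack when the
        // breakpoint was hit.
        List<Double> values = Arrays.asList(19.90, 5000.0, 4999.99);

        // The value observed in the debugger becomes the expected result;
        // this is the part I would review and trim by hand in the second step.
        assertEquals(5000.0, totalAbove(values, 5000.0), 0.001);
    }
}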


TNBT: Proactive IDEs

13. February, 2015

Imagine this situation: You’re working on some code and you get an exception when you run the unit tests. Next to the output is a link with the text: “User Joe had the same exception two months ago and fixed it with the commit b8cfda02.”

How would that work? We’re using big data for all kinds of things, tracking customer happiness, searching the Internet and discovering terrorist threats (or not).

Standard development teams have about 10 people. That means there is already a supercomputer in your office: 40-80 cores, 160 GB of RAM and 20 TB of disk space, connected by a fast LAN. That beast is usually idling while it waits for the developers to press keys. It would be pretty simple to install a clustered log analyzer on this hardware which simply reads all the log files and reports that Maven builds and JUnit runs create. It would be just as simple to connect the same database to your version control. That means this system could track all the errors and exceptions that you get when you run unit tests or the whole application.

This information could then be used to detect when someone in the team gets a new exception, plus the change sets which fix it. If the system detects an exception which it has seen before, it can tell you which developer has fixed it or who is currently working on it. Instead of wasting your time, you could look at the code which contains the solution or ask someone who has already solved the problem.
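
The matching itself is not rocket science. Here is a minimal sketch, assuming the log analyzer stores a fingerprint of each stack trace together with a note about who fixed it and in which commit (the class is invented; only the JDK calls are real):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy in-memory version of the index that the cluster would maintain:
// fingerprint of a stack trace -> "who fixed it and with which commit".
public class ExceptionIndex {

    private final Map<String, String> knownFixes = new HashMap<>();

    public void recordFix(Throwable t, String fixDescription) {
        knownFixes.put(fingerprint(t), fixDescription);
    }

    public Optional<String> lookup(Throwable t) {
        return Optional.ofNullable(knownFixes.get(fingerprint(t)));
    }

    // Hash the exception type plus the top stack frames so that unrelated
    // line-number changes don't break the match.
    static String fingerprint(Throwable t) {
        StringBuilder sb = new StringBuilder(t.getClass().getName());
        StackTraceElement[] frames = t.getStackTrace();
        for (int i = 0; i < Math.min(5, frames.length); i++) {
            sb.append('|').append(frames[i].getClassName())
              .append('#').append(frames[i].getMethodName());
        }
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

The interesting part is not the lookup but the plumbing around it: feeding every build log and test run into such an index and linking the fingerprints to the commits that made them go away.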

With proper filtering, the data could be split into internal and framework code. That way, the system could report to library projects where consumers struggle most.

On a larger scale, this system can tell you which parts of the software are most brittle.

As usual with big data, there are some downsides. The same system would tell you which developer breaks the code most often. Who writes the worst code. If your manager isn’t able to see the human value in his charges, this might not be your best bet.

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT: Documentation Sucks

11. October, 2012

Documentation is the unloved step-mother of software development: Nobody likes it.

On the writing side, documentation is considered a waste of time:

  • I could write code in that time
  • It won’t be valid anymore after the code changes tomorrow, anyway
  • There is no way to make sure you can trust documentation
  • Stringing sentences together is hard work, especially when you want to make them easy to read and understand, and interesting to follow.
  • It’s hard to connect code samples with documentation
  • If I describe too many details, readers will be bored. If I omit too many, they will be confused. There is no way to know which level of detail is good.

On the reading side, it’s a waste of time:

  • I need to solve a problem, I don’t have time to search in a huge dump of text
  • If the author doesn’t trust the documentation, how can I?
  • It will contain too many details that I already know and omit too many facts that I need to understand what is going on.

The core of the issue is that documentation and code are two different things. Documentation is, by nature, abstract. It’s at least one step removed from the solution.

Does it have to be that way?

I hope, with new technologies like Xtext or JetBrains’ Meta Programming System, we will eventually be able to turn documentation into code.

So instead of writing hundreds of lines of code to open a window, give it a size, make sure it remembers its size and position, etc., we could write:

Allow the user to edit a Customer object which has properties from foaf:Person and one or more addresses.

Users can search for Customer objects by any of the name fields.

Note that the links are part of the documentation and the code; the underlying code generator should follow them and examine the code/documentation on the other side.
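
For contrast, here is a minimal Swing sketch of just one slice of the boilerplate that such a specification would replace: opening a window with a size and remembering its size and position across restarts. The class name and preference keys are made up, and a real Customer editor would need far more than this:

import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.util.prefs.Preferences;

import javax.swing.JFrame;
import javax.swing.SwingUtilities;

// The kind of hand-written boilerplate that "open a window, give it a size,
// make sure it remembers its size and position" costs us today.
public class CustomerEditorWindow {

    private static final Preferences PREFS =
            Preferences.userNodeForPackage(CustomerEditorWindow.class);

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Edit Customer");
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);

            // Restore the last size and position, falling back to defaults.
            frame.setBounds(
                    PREFS.getInt("x", 100), PREFS.getInt("y", 100),
                    PREFS.getInt("width", 640), PREFS.getInt("height", 480));

            // Remember the size and position when the window is closed.
            frame.addWindowListener(new WindowAdapter() {
                @Override
                public void windowClosing(WindowEvent e) {
                    PREFS.putInt("x", frame.getX());
                    PREFS.putInt("y", frame.getY());
                    PREFS.putInt("width", frame.getWidth());
                    PREFS.putInt("height", frame.getHeight());
                }
            });

            frame.setVisible(true);
        });
    }
}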

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT: Bringing Code Together

12. July, 2012

If you develop web apps, you have a workflow like this:

  • Repeat forever
    • Edit code
    • Deploy to server
    • Check in browser
    • Tweak HTML/CSS in browser
    • Find the location in the code which is responsible

Sucks? Yes. But until recently, there simply wasn’t a better way to do it. Only Eclipse lets you run an embedded web browser in your IDE, but there is no connection between the code and the output. There usually isn’t even a connection between related parts of the code. Or can you see all the relevant CSS styles while you edit code that generates an HTML tag? I mean: Can you see the CSS styles for “.todo” when you hover your mouse over code that means “send ‘class="todo"’ to the browser”?

Meet Brackets and see how awesome your IDE could be. If seeing is believing, here is the video:

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT – Avoiding Common Errors

7. July, 2011

Writing secure code is ever more important. There are lots of examples: HBGary, Sony, Google.

Even if you’re not one of the biggest companies out there, security starts to become important as soon as your code can be accessed from the Internet. And frankly, which code today can’t?

What’s worse, the problems are always the same: SQL injection, not validating input, using vulnerable code from somewhere else. These problems are neither hard to find nor hard to fix; it’s just too much effort to add the necessary checking and warning code to the existing compilers.
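
To make the first problem concrete, this is the kind of pattern such a checker would have to flag: a query built by string concatenation next to its parameterized equivalent. The table and column names are made up; only the JDBC calls are real:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: user input is concatenated into the SQL text, so input like
    // "' OR '1'='1" changes the meaning of the query.
    ResultSet findUserUnsafe(Connection connection, String name) throws SQLException {
        Statement statement = connection.createStatement();
        return statement.executeQuery(
                "SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safe: the input is bound as a parameter and can never become SQL.
    ResultSet findUserSafe(Connection connection, String name) throws SQLException {
        PreparedStatement statement =
                connection.prepareStatement("SELECT * FROM users WHERE name = ?");
        statement.setString(1, name);
        return statement.executeQuery();
    }
}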

So here is my assumption for my “The Next Best Thing” series of articles: The programming language will allow you to define patterns, the way FindBugs and PMD do, which the compiler checks at compile time and which the VM checks at runtime, to fix or at least warn about such security problems.

With tools like MoDisco and Moose, it’s possible to go one step further: They could analyze and display the code in ways that you haven’t seen before (think Code City), find patterns in the code automatically and warn you about something that you might not have realized yet.

For example, if you use a certain call sequence everywhere in your code but one place, it’s probably worth a look.

Of course, this begs for a way to add lots of additional information to source code. As I said before, we probably want better editors than the plain text editors we have today. It should be possible to include images and formulas in code. Wiki documentation. And things like “yeah, I know, this is different from the 365 other places!”

Sounds a bit like annotations, but frankly, Java source code can only get you so far. DSLs come to mind, but they can’t be extended with arbitrary extra bits of information. It should be possible to overlay one DSL with another so you can mix various kinds of information in one place.

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT – Object Teams

15. March, 2011

Object Teams, or OT/J for short, is a solution for the old Java problem “there is no I in ‘team’”: Most Java code is written as if the whole world were openly hostile. It’s riddled with final, private static, singletons, and thousands of lines of code which almost do what you need, except for this one line that you can’t change without copying the other 999.

Groovy’s solution: AST transformation. A topic for another post.

OT’s solution: create a Java-like programming language which allows you to extend code that wasn’t meant to be extended. A great example: extending Eclipse’s Java compiler.

The Eclipse Java compiler is one of the most complex pieces of code in Eclipse (“5 Mbytes of source spread over 323 classes in 13 packages“). Unlike other compilers, it can compile broken code. The same technology is used to create byte code and error markers in the editor.

Stephan Herrmann wanted to add support for @NonNull and @Nullable. Usually, you’d create a branch, keep that branch in sync with the main branch, etc. Tedious. For every change that someone makes in the main branch, you must update your development branch. Even if the change is completely unrelated. CVS has a very limited concept of “related”. DVCS like Git or Mercurial are better at merging but they also don’t understand enough of Java to give the word “related” a useful meaning. “Same file” is the best you can get.

So instead of the tedious way, he used OT/J to create an OT/Equinox plug-in which patches the JDT compiler byte code. Sounds dangerous? Well, OT/J does all the ugly work. You just say “when this method is called, do this, too.” Sounds a bit like AOP? Yes.

Unlike AOP, it communicates intent more clearly. The code isn’t designed to be the most compact way to define a “pointcut” and then leave it to the reader to figure out what that is supposed to mean.

I’m not completely happy with the syntax, though. I don’t have specific points, only a general wariness. Maybe a careful application of Xtext would help.

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT: JetBrains’ MPS

10. March, 2011

Disclaimer: I’m not a fan of IntelliJ IDEA.

In the past, I’ve always had an eye for people who replaced the ASCII text editor with something … better. Imagine you could use a table to define your constants in Java. And by table, I mean “Excel”, not “align-with-spaces-until-you-go-insane.”

JetBrains is working on this: Table support in MPS 2.0

Let me make this clear: A DSL is nice. But there are so many things that you simply can’t express well with text. State machines. Repeated code. Sometimes, you don’t need the exact words to convey the idea.

I think I’ll waste some time with MPS 2.0 M3 next weekend. There are a couple of tutorials and demos.

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

TNBT: Persistence

19. February, 2011

In this issue of “The Next Best Thing”, I’ll talk about something that every piece of software uses and which is always developed again from scratch for every application: Persistence.

Every application needs to load data from “somewhere” (user preferences, config settings, data to process) and after processing the data, it needs to save the results. Persistence is the most important feature of any software. Without it, the code would be useless.

Oddly, the most important area of the software isn’t a shiny skyscraper but a swamp: Muddy, boggy, suffocating.

Therefore, the next big thing in software development must make loading and saving data blissfully simple. Some features it needs to have:

  • Transaction contexts to define which data needs to be rolled back in case of an error. Changes to the data model must be atomic by default: even if I add 5,000 elements at once, either all or none of them must be added when an error happens. (A sketch of such an API follows this list.)
  • Persistence must be transparent. The language should support rules for how to transform data for a specific storage (file, database), but these should be generic. I don’t want to poison my data model with thousands of annotations.
  • All types must support persistence by default; not being persistable must be the exception.
  • Creating a binary file format must be as simple as defining an XML format.
  • It must have optimizers (which run in the background, like garbage collection does today) that determine how much of the model graph needs to be loaded from storage.
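
A minimal sketch of what the first point could feel like, using an invented API (nothing like this exists yet; all names are made up):

import java.util.List;

// Invented API: a transaction context that makes a batch of model changes
// atomic. This is a wish, not a real library.
public class AtomicImportExample {

    interface TransactionContext extends AutoCloseable {
        <T> void addAll(List<T> elements);   // queued, not yet visible to others
        void commit();                       // all 5,000 elements become visible, or none
        @Override void close();              // rolls back if commit() was never called
    }

    interface Store {
        TransactionContext begin();
    }

    void importElements(Store store, List<Element> elements) {
        try (TransactionContext tx = store.begin()) {
            tx.addAll(elements);   // even 5,000 at once
            tx.commit();           // any exception before this line rolls everything back
        }
    }

    static class Element { }
}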

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

Distributed editing

23. November, 2010

Among other things, version control systems were invented to allow several people to work on the same code. But there is another option: distributed editing. Everyone works on the same code at the same time, and all changes are sent to all involved users as they happen.

Welcome Saros – Distributed Collaborative Editing and Distributed Party Programming


Better code with MoDisco

18. November, 2010

I’m always thinking about better ways to create software. Create. Sculpt.

Code generation? Maybe. But I’ve left kindergarten. Creating thousands of identical sand pies doesn’t make them more tasty. Or useful.

A few days ago, I had a long talk with a guy at an Xtext presentation. Xtext can create EMF models out of computer languages. Why is that useful? Because you can get at the guts of a language.

Look here:

... some complex setup

for( Item item : list ) {
    ... do something smart with item...
}

But the guy who wrote this code (that’s me, thank you) made a mistake: for some reason, items with a value < 5000 should be ignored. No problem, we can simply add that, provided we have the source. But imagine you had a tool to “patch” the code. Like so:

...
for( ... ) {
    if( 5000 > item.value() )
        continue;
    ...
}
...

The “…” are actual code. They mean “anything”. So this reads: “Skip code until you find a for loop and add a new condition at the front of the loop. Leave everything else intact.”

Pretty hard to do today. It would become much easier if we had a tool that could create an EMF model of the Java code. Enter MoDisco.

MoDisco can create a model of existing software. It does what all those CASE tools did, but the other way around: it creates a model of the software as it is now. Not very useful at first glance, but think about the example above.

Or think that you’ve identified a dangerous pattern in your code base. Now you want to fix it. MoDisco can search for these patterns for you.
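
MoDisco's own metamodel aside, the flavor of the idea can be shown with the plain Eclipse JDT AST: parse the source into a model and visit it to find every for-each loop, which is exactly the hook the "patch" above would need. This is only a stand-in sketch, not MoDisco API:

import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.EnhancedForStatement;

// Stand-in sketch using the Eclipse JDT AST (not MoDisco's own metamodel):
// turn source text into a model and search it for the pattern we care about,
// here every for-each loop.
public class ForLoopFinder {

    public static void main(String[] args) {
        String source =
                "class Demo {\n" +
                "  void run(java.util.List<String> list) {\n" +
                "    for (String item : list) { System.out.println(item); }\n" +
                "  }\n" +
                "}\n";

        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setKind(ASTParser.K_COMPILATION_UNIT);
        parser.setSource(source.toCharArray());
        CompilationUnit unit = (CompilationUnit) parser.createAST(null);

        unit.accept(new ASTVisitor() {
            @Override
            public boolean visit(EnhancedForStatement node) {
                // This is where a patching tool could insert the
                // "if( 5000 > item.value() ) continue;" guard.
                int line = unit.getLineNumber(node.getStartPosition());
                System.out.println("for-each loop found at line " + line);
                return true;
            }
        });
    }
}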

