Jazoon 2012: Architecting non-trivial browser applications

28. June, 2012

Marc Bächinger gave a presentation on how to develop HTML5 browser applications.

The big advantage of HTML5+JavaScript is that it gives users a better experience and usability. One of the first steps should be to decide which framework(s) you want to use. You can use one of the big, monolithic, one-size-fits-all frameworks that do everything or select best-of-breed frameworks for specific aspects (browser facade, MVC framework, helper libraries and components).

You should use REST on the server side because that makes the server and the components of your application easier to reuse.

The main drawback is that you have (often much) more complexity on the client. This can be controlled by strict application of the MVC pattern.

Browser facades

Every browser has its quirks and most of the time, you just don’t want to know about them. Browser facades try hard to make all browsers look alike. Examples are jQuery and Zepto.js.

MVC frameworks

Backbone.js, Spine.js, Knockout.js, ember.js, JavaScriptMVC, Top 10 JavaScript MVC frameworks

Helper libraries and frameworks

gMap, OSM, Raphaël, jQuery UI, Twitter bootstrap.js, mustache, jade

Important

Since the whole application now runs in the client, security becomes even more important: attackers can do anything you don’t expect.


Jazoon 2012: How to keep your Architecture in good Shape?!

28. June, 2012

Ingmar Kellner presented some tips on how to keep your architecture from rotting into a mess. When it does rot, you will have these problems:

  • Rigidity – The system is hard to change because every change forces many other changes.
  • Fragility – Changes cause the system to break in conceptually unrelated places.
  • Immobility – It’s hard to disentangle the system into reusable components.
  • Viscosity – Doing things right is harder than doing things wrong.
  • Opacity – It is hard to read and understand. It does not express its intent well.

(Robert C. Martin)

According to Tom DeMarco, your ability to manage this depends on control. And control depends on measurements – if you can’t measure something, you can’t control it.

How rotten is your software? Look for cycle groups (some package X depends on Y depends on Z depends on A depends on X):

  • They tend to stay
  • They tend to grow
  • They are a strong smell
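A cycle group like the one above (X depends on Y depends on Z depends on A depends on X) can be found mechanically with a depth-first search over the package dependency graph. A minimal sketch, with a made-up dependency map mirroring that example:

```java
import java.util.*;

// Sketch (not from the talk): finding a dependency cycle among packages
// with a depth-first search. The package names and dependencies are
// made up for illustration.
public class CycleCheck {

    // directed graph: package -> packages it depends on
    static final Map<String, List<String>> DEPS = Map.of(
        "x", List.of("y"),
        "y", List.of("z"),
        "z", List.of("a"),
        "a", List.of("x"),   // closes the cycle x -> y -> z -> a -> x
        "common", List.of()
    );

    static boolean hasCycle(String start) {
        return dfs(start, new HashSet<>(), new HashSet<>());
    }

    // classic DFS: seeing a node that is already on the current path
    // means we walked in a circle
    static boolean dfs(String node, Set<String> onPath, Set<String> done) {
        if (onPath.contains(node)) return true;
        if (done.contains(node)) return false;
        onPath.add(node);
        for (String dep : DEPS.getOrDefault(node, List.of())) {
            if (dfs(dep, onPath, done)) return true;
        }
        onPath.remove(node);
        done.add(node);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasCycle("x"));      // part of a cycle group
        System.out.println(hasCycle("common")); // clean
    }
}
```

Tools like the ones Ingmar showed do exactly this, just on real bytecode instead of a toy map.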

Ingmar showed some examples in the JDK 6 (lots of cycles) and ActiveMQ (lots of cycles in 4.x, much better in 5.0 but again growing since then).

What can you do?

Use a consistent “architecture blueprint” that makes it obvious which layer/slice can use what. In the blueprint, layers are horizontal (presentation, domain, persistence) and slices are vertical (everything related to contracts, customers, users, and finally common code).

You will need someone with the role “Architect” who “defines the architecture, thresholds for coding metrics, identifies ‘hot spots’” and developers who “implement use cases, respecting the architecture and coding metrics thresholds.” All this is verified by a CI server.

At the same time, avoid “rulitis” – the false belief that more and stricter rules make things “better.”

Some rules you might want to use:

  • The blueprint is free of cycles
  • Package naming convention that matches the blueprint
  • Control coupling and cycles with tools
  • Use tools to control code duplication, file size, cyclomatic complexity, number of classes per package, etc.
  • Reserve 20% of your time for refactoring
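The “package naming convention that matches the blueprint” rule is easy to check mechanically. A minimal sketch, assuming a made-up root package com.example.app and the slices and layers from the blueprint above:

```java
import java.util.regex.Pattern;

// Sketch: verifying that a package name follows the blueprint
// <root>.<slice>.<layer>. The root package is an assumption; the
// slices and layers are the ones from the blueprint in the text.
public class BlueprintRule {

    static final Pattern PACKAGE_RULE = Pattern.compile(
        "com\\.example\\.app\\." +
        "(contracts|customers|users|common)\\." +    // slice
        "(presentation|domain|persistence)(\\..*)?"  // layer + subpackages
    );

    static boolean conforms(String packageName) {
        return PACKAGE_RULE.matcher(packageName).matches();
    }

    public static void main(String[] args) {
        System.out.println(conforms("com.example.app.customers.domain")); // ok
        System.out.println(conforms("com.example.app.domain.customers")); // wrong order
    }
}
```

A CI server can run a check like this over all packages and fail the build on violations.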

Following these rules can help to reduce costs during the maintenance phase:

  • 50% less time
  • 50% of the budget
  • 85% fewer defects

according to a study conducted by Barry M. Horowitz for the Department of Defense.


Jazoon 2012: Why you should care about software assessment

28. June, 2012

Tudor Girba gave a presentation at the Jazoon about a topic that is very dear to him: Software assessment. To quote:

What is assessment? The process of understanding a given situation to support decision-making. During software development, engineers spend as much as 50% of the overall effort on doing precisely that: they try to understand the current status of the system to know what to do next.

In other words: Assessment is a process and a set of tools that help developers make decisions. The typical example: a bug shows up and you need to fix it. That raises the usual questions:

  1. What happened?
  2. Why did it happen?
  3. Where did it happen?
  4. How can I fix it?

As we all know, each of these steps can be difficult. As an extreme example, someone mentioned selling software to the NSA. It crashed. The NSA calls the developer:

NSA: “There is a problem with your software.”

You: “Who am I talking with?”

NSA: “Sorry, I can’t tell you that.”

You: “Well … okay. So what problem?”

NSA: “I can’t tell you that either.”

You: “… Can you give me a stack trace?”

NSA: “I’m afraid not.”

Unlikely but we all know similar situations. Even seasoned software developers are guilty of giving completely useless failure reports: “It didn’t work.” … “What are you talking about? What’s ‘it’?”

Tudor gave some nice examples of how he used simple assessment tools to query the log files and sources of an application in order to locate bugs, find similar bugs, and figure out why some part doesn’t behave well. Examples:

  1. An application usually returns text in the user’s language but some rare error message is always in German. Cause: When the error message was created, the code called Locale.getDefault() instead of using the user’s locale.
  2. Several other places could be found that showed the same behavior by searching the source code for places where Locale.getDefault() was called either directly or indirectly. A test case was added to prevent this from happening again.
  3. A cache had a hit ratio of less than 50%. Analyzing the logs showed that two components were sharing the same cache. When each got its own cache, the hit ratios reached sane levels.
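The first example boils down to a well-known i18n bug. A minimal sketch of the buggy and the fixed variant, with a made-up translations map standing in for a real ResourceBundle:

```java
import java.util.*;

// Sketch of the bug in example 1: building a message with
// Locale.getDefault() instead of the user's locale. The translations
// map is made up; real code would use ResourceBundle.
public class ErrorMessages {

    static final Map<Locale, String> NOT_FOUND = Map.of(
        Locale.GERMAN,  "Datei nicht gefunden",
        Locale.ENGLISH, "File not found"
    );

    // buggy: the message language depends on the JVM's default locale,
    // not on the user who will see the message
    static String notFoundBuggy() {
        return NOT_FOUND.getOrDefault(Locale.getDefault(),
                                      NOT_FOUND.get(Locale.ENGLISH));
    }

    // fixed: the user's locale travels with the request
    static String notFound(Locale userLocale) {
        return NOT_FOUND.getOrDefault(userLocale,
                                      NOT_FOUND.get(Locale.ENGLISH));
    }

    public static void main(String[] args) {
        Locale.setDefault(Locale.GERMAN);             // simulate a German server
        System.out.println(notFoundBuggy());          // German, no matter who asks
        System.out.println(notFound(Locale.ENGLISH)); // English for an English user
    }
}
```

Example 2 is then just a search for all code paths that reach the buggy variant.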

So assessments allow you to do strategic planning by showing you all the dependencies that some part of the code (or the whole application) has.

In a spike assessment, you can analyze some small part to verify that a change would or could have the desired effect (think performance).

Did you know that developers spend about 50% of the time reading code? If tools can help them understand some piece of code faster, that makes them more productive. Unfortunately, today’s tools are pretty limited when it comes to this. Eclipse can show me who calls Locale.getDefault() but it can’t show me indirect calls.

Worse: If a developer makes the wrong decision because she couldn’t see all the important facts, that decision often has a huge impact.

Another important aspect is how you use metrics. Metrics are generally useful but the same is not true for every metric. Just like you wouldn’t copy unit tests from one project to the next, you need to reevaluate the metrics that you extract from each project. Some will just be a waste of time for certain projects.

My comments:

We really, really need better tooling to chop data. IDEs should allow me to run queries against my source code, collect and aggregate data and check the results in unit tests to validate design constraints.

It was also interesting to see how Tudor works. He often uses simple words, which can be misleading. But when you look at his slides, there was this graph of some data points. Most graphs use a linear Y axis with the ticks evenly spread; he uses a different approach:

Usual diagram to the left, Tudor’s version to the right


Jazoon 2012: Large scale testing in an Agile world

28. June, 2012

Alan Ogilvie works in a division of IBM responsible for testing IBM’s Java SE product. Some numbers from his presentation:

  • A build for testing is about 500MB (takes 17 min to download to a test machine)
  • There are 20 different versions (AIX, Linux, Windows, z/OS × x86, POWER, zSeries)
  • The different teams create 80..200 builds every day
  • The tests run on heaps from 32MB to 500GB
  • They use hardware with 1 to 128+ cores
  • 4 GC policies
  • More than 1000 different combinations of command line options
  • Some tests have to be repeated many times to catch “1 out of 100” failures that happen only rarely

That amounts to millions of test cases that run every month.

1% of them fail.

To tame this beast, the team uses two approaches:

  1. Automated failure analysis that can match error messages from the test case to known bugs
  2. Not all of the tests are run every time

The first approach makes sure that most test failures can be handled automatically. If some test is there to trigger a known bug, that shouldn’t take any time from a human – unless the test suddenly succeeds.
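The matching in the first approach can be as simple as a table of known error patterns. A sketch, with made-up patterns and bug IDs (IBM’s actual matcher wasn’t shown):

```java
import java.util.*;
import java.util.regex.*;

// Sketch: match the error output of a failed test against known bugs,
// so that only unmatched failures reach a human. Patterns and bug IDs
// are invented for illustration.
public class FailureMatcher {

    static final Map<Pattern, String> KNOWN_BUGS = Map.of(
        Pattern.compile("OutOfMemoryError.*metaspace"), "BUG-1234",
        Pattern.compile("Timeout waiting for GC"),      "BUG-5678"
    );

    // returns the known bug ID, or empty if a human needs to look at it
    static Optional<String> match(String errorOutput) {
        return KNOWN_BUGS.entrySet().stream()
            .filter(e -> e.getKey().matcher(errorOutput).find())
            .map(Map.Entry::getValue)
            .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(match("java.lang.OutOfMemoryError: metaspace"));
        System.out.println(match("NullPointerException at Foo.java:42"));
    }
}
```

A matched failure is filed against the known bug automatically; only the unmatched ones (and known-bug tests that suddenly pass) cost human time.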

The second approach is more interesting: They run only a small fraction of the tests every time the test suite is started. How can that possibly work?

If you run a test today and it passes, you can be fairly confident that it would still pass tomorrow. You’re not 100% sure but, well, maybe 99.5%. So you might skip the test tomorrow and mark it as “light green” in the test results (as opposed to “full green” for a test that has actually been run this time).

What about the next day? You’re still 98% sure. And the day after that? Well, confidence is waning fast, but you’re still pretty sure – 90%.

The same goes for tests that fail. Unless someone did something about the failure (and requested that this specific test be run again), you can be pretty sure the test would fail again. So it gets marked “light red”, unlike the tests that actually failed today.

This way, most tests only have to be run once every 4-5 days during development.
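The decaying-confidence idea can be sketched as a tiny scheduler. The decay function below is my own guess (the actual model wasn’t shown); with a 90% threshold it happens to rerun a passing test after about 5 days:

```java
// Sketch (assumed numbers, not IBM's actual algorithm): a test's
// confidence decays with the days since it last ran, and the test is
// only rerun once confidence drops below a threshold.
public class TestScheduler {

    // made-up decay: ~99.5% after one day, fading faster after that
    static double confidence(int daysSinceLastRun) {
        return Math.pow(0.995, daysSinceLastRun * daysSinceLastRun);
    }

    static boolean shouldRun(int daysSinceLastRun, double threshold) {
        return confidence(daysSinceLastRun) < threshold;
    }

    public static void main(String[] args) {
        for (int days = 0; days <= 5; days++) {
            System.out.printf("day %d: confidence %.3f, run again: %b%n",
                days, confidence(days), shouldRun(days, 0.90));
        }
    }
}
```

With these numbers the test is skipped on days 1–4 and scheduled again on day 5, which matches the “once every 4-5 days” cadence from the talk.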

Why would they care?

For a release, all tests need to be run. That takes three weeks.

They really can’t possibly run all tests all the time.


Jazoon 2012: Development Next – And Now For Something Completely Different?

28. June, 2012

Dave Thomas gave the keynote speech (link) about how technology seems to change all around us, only to surface the same old problems over and over again.

Things that I took home:

  • In the future, queries will be more important than languages
  • Big data is big

Some comments from me:

How often were you irritated by how source code from someone else looked? I don’t mean sloppy code, I mean indentation or how they place spaces and braces. In 2012, it should be possible to separate the model (source code) from the view (text editor) – why can’t my IDE simply show me the source in the way that I like and keep the source code in a nice, common format? (see Bug 45423 – [formatting] Separate presentation from formatting)

And how often have you wondered “Which parts of the code call Locale.getDefault() either directly or indirectly?”

How often did you need to make a large-scale change in the source code which could have been done with a refactoring in a few minutes – but writing the refactoring would have taken days because there simply are no tools to quickly write even simple refactorings in your IDE?

Imagine this: You can load the complete AST of your source code into a NoSQL database. And all the XML files. And the configuration files. And the UML. And everything else. And then create links between those parts. And query those links … apply a piece of JavaScript to each matching node …

Customers of my applications always want new business reports every day. It takes way too long to build these reports. It takes too much effort to change them. And it’s impossible to know whether a report is correct because no one can write test cases for them.


e4 is here

28. June, 2012

Unless you live under a rock or don’t use Eclipse, you can’t possibly have missed it: Eclipse Juno (a.k.a. 4.2)

If you’re unsure what happened to 3.8 or 4.0 and 4.1, here is the story in a nutshell: A small team of Eclipse developers was concerned about the state of the platform (the platform is the part of Eclipse that knows what a plug-in is, how to find, install and load plug-ins, and how to arrange them in a neat way called a “perspective”). When the platform was developed in 2004, it was based on VisualAge. Don’t ask. Like all “visual” tools, it had … issues. Many of them were hardcoded into the platform. For example, there are many singletons in there. Many. Testing the platform is either a nightmare or slow or both.

The e4 team decided that something needed to be done. Only, it was risky. And it would take a long time. Which meant that the PMCs probably wouldn’t approve.

Therefore, they came up with a cunning plan: Develop a new platform (e4) alongside the official Eclipse releases. e4 would get a “compatibility layer” that allows it to run the old junk (like the JDT) but it would also allow writing plug-ins in a new, clean, understandable way. For example, in 3.x, you use this code to get the current selection:

ISelection selection = getSite().getWorkbenchWindow().getSelectionService().getSelection();
Object item = ((IStructuredSelection) selection).getFirstElement();

The same code looks a bit different in e4:

@Inject
void setSelection(@Optional @Named(IServiceConstants.ACTIVE_SELECTION) Contact contact) {
    ...
}

e4 will call this method every time the user selects an instance of Contact in the UI.

The whitepaper gives some more details. Lars Vogel wrote a pretty long Eclipse 4 RCP tutorial.

After two years, e4 was deemed stable enough to make it the default release.

Conclusion: e4 doesn’t get rid of all the problems (p2, OSGi and SWT are still there) but it’s a huge step forward.
