Designing DSLs

16. July, 2012

Hello and welcome to a new series of blog posts called “Designing DSLs” or DDSL for short. If you have used or designed a DSL before, then you’ll know that there are a couple of pitfalls. This blog series aims to provide tips on how to build “great” DSLs – whatever that might be 😉

What are the most common pitfalls for designers of DSLs?

  • The DSL is too broad
  • The DSL is too limited
  • The syntax has weird quirks (a.k.a. backwards compatibility syndrome)

Why is it so hard to design a great DSL? They should be simple, right?

Well, as Einstein (“Everything should be made as simple as possible, but no simpler”) and Blaise Pascal (“I would have written a shorter letter, but I did not have the time.”) already knew, it’s always easy to make something complicated – simplicity is hard.

On top of that, as Gödel showed, every sufficiently powerful formal system is either incomplete or inconsistent. And let’s not forget that each DSL is a model, too. And as you might know, all models are wrong but some are useful.

Should we abandon all hope? No. Just always remember that a good DSL is hard work.

First, a general tip: Look at existing examples. There are thousands of examples out there; use them. Knowing several programming languages yourself is a big bonus (everyone should know more than two languages).

“Wait a minute,” I hear you ask, “these are real programming languages!” So? A lot of brainpower went into designing them (or into working around their shortcomings), which makes them a great source of inspiration. Bonus: A lot of people know these languages, which gives you a larger audience to discuss ideas with (as opposed to the 3-4 people who will use your DSL in the beginning).


Software Development Costs

14. July, 2012

I’ve prepared a small presentation to give an overview of software development costs.

This diagram describes the costs/gain per feature.

Complexity Curve

The simplest curve, complexity, is easy to understand: Costs go way up as you add features. Adding another feature to an already complex product is way more expensive than adding the first feature to a non-existent product.

Bugs in Final Product

The number of bugs in the final product is harder to understand. As you add features, you also add bugs. The number of bugs per kLOC is an individual constant: We all tend to make the same mistakes, and the number of bugs each of us creates per kLOC is pretty stable. The number is different for each person, but every developer has their own number and it doesn’t change much unless external circumstances change dramatically. In fact, if you keep statistics about bugs found per team member, you can predict how many new bugs there will be after someone adds N lines of code (see “They Write the Right Stuff“).

That means every product has bugs. If the project isn’t a complete disaster, then the team will have found a way to cope with these. Or to put it another way: If the number of bugs grows too fast, the project will either be canceled or drastic measures will be taken to reduce the flaws again.

This is what the curve means: In the beginning, there will be few bugs because there are only a few lines of code. Remember: number of bugs = lines of code * individual constant. Each line that you don’t write reduces the number of defects.
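
To illustrate with made-up numbers: A developer who introduces two defects per kLOC will have added roughly 100 defects after writing 50 kLOC – and about 20 more with every 10 kLOC on top of that.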

As time passes, the number of bugs will grow just because lines of code are written. Eventually, that number will either explode or the team will find a way to keep the number in check.

Gain per Feature a.k.a. ROI

The last curve is for the marketing department. It describes the usefulness of the product for a customer as features are added. A product without features (a.k.a. vaporware) is not very useful for a customer. The first feature will be the most useful … or it should be: Why are you wasting your and your customer’s time with features that aren’t the most useful?

But as you add features – and trust me, customers and marketing will try to get as many as they can – the usefulness doesn’t grow as much anymore. Each feature comes with the cost of complexity: There will be more menu items, dialogs and buttons. The manual will get bigger. The customer will need to remember more to use every feature. That starts with remembering that a feature even exists and goes on with remembering how to use it efficiently.

At the same time, you started with adding the most useful features, right? So additional features, by definition, can’t be as useful as the first ones.

And eventually, the product will contain more features than any single customer cares about. New features will be added for new customers – features that existing customers don’t care about or that even get in their way (when menu items move around, for example).

This is one reason why everyone feels that Google or Apple products are so easy to use: They work really, really hard to reduce the number of features in their products.

Next week: Bug fixing costs.


TNBT: Bringing Code Together

12. July, 2012

If you develop web apps, you have a workflow like this:

  • Repeat forever
    • Edit code
    • Deploy to server
    • Check in browser
    • Tweak HTML/CSS in browser
    • Find the location in the code which is responsible

Sucks? Yes. But until recently, there simply wasn’t a better way to do it. Only Eclipse allows you to run an embedded web browser in your IDE, but there is no connection between the code and the output. There usually isn’t a connection between related parts of the code, either. Or can you see all the relevant CSS styles while you edit code that generates an HTML tag? I mean: Can you see the CSS styles for “.todo” when you hover your mouse over code that means “send ‘class=”todo”’ to the browser”?

Meet Brackets and see how awesome your IDE could be. If seeing is believing, here is the video:

Related Articles:

  • The Next Best Thing – Series in my blog where I dream about the future of software development

Sonar: The current batch process and the configured remote server do not share the same DB configuration

10. July, 2012

You might see this error message when starting the Sonar client, for example via Maven (mvn sonar:sonar):

The current batch process and the configured remote server do not share the same DB configuration
- Batch side: jdbc:...
- Server side: check the configuration at http://.../system

The message is a bit misleading. Sonar doesn’t actually compare the database URLs; it compares an ID which you can find in the database table properties under prop_key = sonar.core.id. And that isn’t really an ID – it’s the start time of the Sonar web server:

select * from properties where prop_key = 'sonar.core.id'

There are two reasons why there could be a mismatch:

  1. The database URLs on the batch and the server side don’t match (just check the configuration via the URL which the Sonar client gives you)
  2. There are two Sonar servers using this database. This can happen, for example, when you migrated the service from one host to another and forgot to shut down the old version properly.

Jazoon 2012: CQRS – Trauma treatment for architects

4. July, 2012

A few years ago, concurrency and scalability were hype. Today, they are a must. But how do you write applications that scale painlessly?

Command and Query Responsibility Segregation (CQRS) is an architectural pattern to address these problems. In his talk, Allard Buijze gave a good introduction. First, some of the problems of the standard approach. Your database, everyone says, must be normalized.

That can lead to a couple of problems:

  • Historic data changes
  • The data model is neither optimized for writes nor for queries

The first problem can result in a scenario like this. Imagine you have a report that tells you the annual turnover. You run the report for 2009 in January, 2010. You run the same report again in 2011 and 2012 and each time, the annual turnover of 2009 gets bigger. What is going on?

The data model is in third normal form. This is great, no data duplication. It’s not so great when data can change over time. So if your invoices point to the products and the products point to the prices, any change of a price will also change all the existing invoices. Or when customers move, all the addresses on the invoices change. There is no way to tell where you sent something.

The solution is to add “valid time range” to each price, address, …, which makes your SQL hideous and helps to keep your bug tracker filled.

It will also make your queries slow since you will need lots and lots of joins. These joins will eventually get in conflict with your updates. Deadlocks occur.

On the architectural side, some problems will be much easier to solve if you ignore the layer boundaries. You will end up with business logic in the persistence layer.

Don’t get me wrong. All these problems can be solved but the question here is: Is this amount of pain really necessary?

CQRS to the rescue. The basic idea is to use two domain models instead of one. Sounds like more work? That depends.

With CQRS, you will have more code to maintain but the code will be much simpler. There will be more tables and data will be duplicated in the database, but there will never be deadlocks, and queries won’t need joins in the usual case (you could get rid of all joins if you wanted). So you trade bugs for code.

How does it work? Split your application into two main parts. One part takes user input and turns that into events which are published. Listeners will then process the events.
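
To make this concrete, here is a minimal sketch in plain Java. All names (CheckoutEvent, EventListener, CheckoutHandler) are made up for illustration – a real project would likely use a CQRS framework instead of hand-rolling this:

import java.util.List;

// An immutable event: value objects only, no foreign keys.
final class CheckoutEvent {
  final String cartId;
  final List<String> itemSnapshots;

  CheckoutEvent(String cartId, List<String> itemSnapshots) {
    this.cartId = cartId;
    this.itemSnapshots = List.copyOf(itemSnapshots); // keep the event immutable
  }
}

interface EventListener {
  void on(CheckoutEvent event);
}

// The command side: turn user input into an event and publish it.
class CheckoutHandler {
  private final List<EventListener> listeners;

  CheckoutHandler(List<EventListener> listeners) {
    this.listeners = listeners;
  }

  void checkout(String cartId, List<String> itemSnapshots) {
    CheckoutEvent event = new CheckoutEvent(cartId, itemSnapshots);
    for (EventListener listener : listeners) {
      listener.on(event); // one listener stores the event, others build read models
    }
  }
}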

Some listeners will write the events into the database. If you need to, you will be able to replay these later. Imagine your customer calls you because of some bug. Instead of asking your customer to explain what happened, you go to the database, copy the events into a test system and replay them. It might take a few minutes but eventually, you will have a system which is in the exact same state as when the bug happened.

Some other listeners will process the events and generate more events (which will also be written to the database). Imagine the event “checkout”. It will contain the current content of the shopping cart. You write that into the database. You need to know what was in the shopping basket? Look for this event.

The trick here is that the event is “independent”. It doesn’t contain foreign keys, only immutables or value objects. The value objects are written into a new table. That makes sure that when you come back 10 years later, you will see the exact same shopping cart as the customer saw when she ordered.

When you need to display the shopping cart, you won’t need to join 8 tables. Instead, you’ll need to query 1-2 tables for the ID of the shopping cart. One table will have the header with the customer address, the order number, the date, the total and the second table will contain the items. If you wanted, you could add the foreign keys to the product definition tables but you don’t have to. If that’s enough for you, those two tables could be completely independent of any other table in your database.

The code to fill the database gets the event as input (no database access to read anything from anywhere) and it will only write to those two tables. Minimum amount of dependencies.
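
Continuing the sketch from above, such a projection could look like this – JdbcTemplate is Spring’s JDBC helper, and the table and column names are made up:

import org.springframework.jdbc.core.JdbcTemplate;

// Builds the read model for the shopping cart from the event alone.
class CartProjection implements EventListener {
  private final JdbcTemplate jdbc;

  CartProjection(JdbcTemplate jdbc) {
    this.jdbc = jdbc;
  }

  @Override
  public void on(CheckoutEvent event) {
    // Nothing is read from any other table; the event is the only input.
    jdbc.update("insert into cart_header (cart_id) values (?)", event.cartId);
    for (String item : event.itemSnapshots) {
      jdbc.update("insert into cart_item (cart_id, description) values (?, ?)",
          event.cartId, item);
    }
  }
}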

The code to display the cart will only need to read those two tables. No deadlocks possible.

The code will be incredibly simple.

If you make a mistake somewhere, you can always replay all the events with the fixed code.

For tests, you can replay the events. No need for a human to click buttons in a web browser (not more than once, anyway).

Since you don’t need foreign keys unless you want to, you can spread the data model over different databases, computers, data centers. Some data would be better in a NoSQL repository? No problem.

Something crashes? Fix the problem, replay the events which got lost.
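
Replay itself can be as simple as this sketch, reusing the classes from above (a real event store would stream the events from the database in their original order):

import java.util.List;

class EventReplayer {
  private final List<CheckoutEvent> eventStore; // in reality: loaded from the database
  private final List<EventListener> listeners;

  EventReplayer(List<CheckoutEvent> eventStore, List<EventListener> listeners) {
    this.eventStore = eventStore;
    this.listeners = listeners;
  }

  // Feed every stored event through the (fixed) listeners again, in order.
  // Listeners with external side effects must be disabled first.
  void replay() {
    for (CheckoutEvent event : eventStore) {
      for (EventListener listener : listeners) {
        listener.on(event);
      }
    }
  }
}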

Instead of developing one huge monster model where each change possibly dirties some existing feature, you can imagine CQRS as developing thousands of mini-applications that work together.

And the best feature: It allows you to retroactively add features. Imagine you want to give users credits for some action. The idea is born one year after the action was added. In a traditional application, it will be hard to assign credit to the existing users. With CQRS, you simply implement the feature, set up the listeners, disable the listeners which already ran (so the action isn’t executed again) and replay the events. Presto, all the existing users will have their credit.


Jazoon 2012: Spring Data JPA – Repositories done right

4. July, 2012

Oliver Gierke presented “Spring Data JPA – Repositories done right” at Jazoon. The motto of Spring Data could be “deleted code doesn’t contain bugs.” From the web site:

Spring Data makes it easier to build Spring-powered applications that use new data access technologies such as non-relational databases, map-reduce frameworks, and cloud based data services as well as provide improved support for relational database technologies.

Spring Data is an umbrella open source project which contains many subprojects that are specific to a given database. The projects are developed by working together with many of the companies and developers that are behind these exciting technologies.

When you use any form of JPA, you will eventually end up with DAOs which contain many boring methods: getById(), getByName(), getByWhatever(), save(), delete(). How do you like this implementation:

interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {
  T findOne(ID id);
  T save(T entity);
}

interface UserRepository extends MyBaseRepository<User, Long> {
  User findByEmailAddress(EmailAddress emailAddress);
}

“Wait a minute,” I can hear you think, “these are just interfaces. Where is the implementation?”

That is the implementation. You can now inject those interfaces as DAOs and call the methods. Behind the scenes, Spring will generate a proxy for you that actually implements the methods. 0 lines of code for you to write for 95% of the basic DAO methods.
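
Using such a repository is plain dependency injection. A minimal sketch – UserService is a made-up name, while User and EmailAddress are from the example above:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
class UserService {
  private final UserRepository repository;

  @Autowired
  UserService(UserRepository repository) {
    this.repository = repository; // Spring injects the generated proxy
  }

  User findUser(EmailAddress email) {
    return repository.findByEmailAddress(email); // no hand-written DAO code
  }
}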

The queries can even be more complex:

List<User> findByEmailAddressAndLastname(EmailAddress emailAddress, String lastname);

Spring will derive a query from the method name that searches by those two columns. See the documentation for more examples of how you can write queries, including ones that traverse joins.
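
If a derived method name would get unwieldy, Spring Data JPA also lets you declare the query yourself with the @Query annotation. A sketch (UserQueries is a made-up name):

import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;

interface UserQueries extends Repository<User, Long> {

  // Derived query: Spring builds it from the method name alone.
  List<User> findByLastname(String lastname);

  // Declared query: you write the JPQL, Spring binds the parameters.
  @Query("select u from User u where u.emailAddress = ?1")
  User findByEmail(EmailAddress email);
}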

On top of that, they built a REST exporter which exposes your DAO interfaces with a REST API to a web browser plus a web front end to explore the repository, to run the queries and to create new objects. Impressive.


Jazoon 2012: Improving system development using traceability

4. July, 2012

When you develop software, you will ask yourself these questions (quoted from here):

  • Is it still possible to accept a late change request? What would be the impact?
  • What is the overall level of completion of the system or a component?
  • Which components are ready for testing?
  • A failure occurs because the system is erroneous. What parts of the system should I check?

In his talk “Improving system development using traceability”, Ömer Gürsoy showed an approach to answer these questions. The idea is to trace changes end-to-end: from the idea through requirements to design, implementation, tests, bug reports and the product manual. For this to work, you’ll need to:

  • Analyze
  • Document
  • Validate
  • Manage

At itemis, they developed tooling support. A plug-in for Eclipse can track changes in all kinds of sources (text documents, UML diagrams, requirement DSLs) and “keep them together”. It can answer questions like “who uses this piece of code?”

The answer will tell you where you need to look to estimate the impact of a change. That helps to avoid traps like underestimating the effort or overlooking affected parts of the system.

Today, the plug-in shows some promise but there are rough edges left. The main problem is integration with other tools. The plug-in supports extension points to add any kind of data source but that only helps if the data source is willing to share. The second problem is that it doesn’t support versioning right now. It’s on the feature list.

On the positive side, it can create dependencies from a piece of text (say a paragraph in a text file). If you edit other parts of the text file, the tool will make sure the dependency still points to the right part of the text. So you can make notes during a meeting. Afterwards, you can click on the paragraphs and link them to (new) requirements or parts of the code (like modules) that will be affected. Over time, a graph of dependencies will be created that helps you to keep track of everything that is related to some change and how it is related: Where did the request come from? Which code was changed?

Always keep in mind that tracking everything isn’t possible – it would simply be too expensive today. But you can track your most important or most dangerous changes. That will give you the most bang for the buck. To do that, you must know what you need to track and why.

A feature that I’d like to see is automatic discovery. Especially Java source code should be easy to analyze for dependencies.


Jazoon 2012: Software product creation

4. July, 2012

As we all know, there is no surefire way to get rich. But in his talk “Software product creation”, Robert Brazile bombarded the audience with lots of useful hints on how to shape the odds in your favor when you try to sell software. His focus was on the “product manager.” A product manager (PM) is the interface between customers, developers and sales. She must know:

  • Which features are important
  • And how important
  • To which customer
  • How long they will take to develop
  • What’s really going to be in the next version
  • What to tell sales
  • What to tell the customer
  • Are you selling a product or a feature?
  • The roadmap

The core tasks of a PM are:

  • Gathering data (bug reports, development points, sales figures)
  • Managing the vision for the product
  • Saying “No” to sales (because they always want to promise everything)

Since the PM is so important, one could assume she has a lot of power. This is true, but all the power is indirect. She can’t control the customer. She can’t force sales to say what she wants them to. She can tell the developers what should be in the next version, but that doesn’t mean she’ll get it.

In some ways, she is also like an on-site customer: She must know exactly what each customer needs most dearly and what was promised to them. She knows all the requirements and has a clear idea how to evolve the product in the future.

She’s not sales, but she must know exactly why customers should buy the product.

When supporting sales in a meeting with the customer, always undersell and over-deliver. Sales tends to promise everything and the kitchen sink (that’s what they are paid for). As PM, it’s your responsibility to build trust. Only promise features that you are 100% sure will be in the next version. There will be a lot of pressure to say “yes” (customer expectations, accusations by sales that “you’re not a team player”). Always remember: It’s hard to build trust but easy to blow it. There is no blame for being early but a lot of negative consequences for being late.

The main tools of a PM are data (a.k.a. facts) and persuasion. The aim is knowledge, not solutions. She knows what everyone wants, but the teams must provide solutions.

To help with the process, she must aggregate the data into consumable chunks. Just don’t aggregate too much.

“Pricing is an art,” says Robert. In some cases, a higher price can make it easier to sell because cheap means “no value” in the customer’s brain. You might feel compelled to split the product so customers can pay only for what they need. On the other hand, the product should work out of the box – meaning it should have all the features – or customers might feel cheated: “Why do I have to pay so much for one little extra feature?”

When it comes to selling, faster almost always wins. Generation MTV can’t wait.

When it comes to features, don’t be a victim of your last conversation. Always keep the big picture in mind. Yes, some features are an incredibly great idea, but do they really fit into your long-term strategy? What do you win if you sell more for a few months just to end up with a product that you can’t evolve anymore?

Biggest Mistakes

  • Telling people what they want to hear
  • No Plan B. Some things will go wrong. Always consider worst cases and risks.
  • Ignoring technical debt and focusing on features only

Jazoon 2012: Agile Chartering: Energize Every Project Liftoff

4. July, 2012

In her talk “Agile Chartering: Energize Every Project Liftoff,” Diana Larsen presented approaches for setting up your agile projects. Why is that important? When a rocket is launched into space, a lot of preparation happens to make sure the move from ground to space is smooth and successful.

Software projects often ignore this important step.

For example, it would make sense to check the commitment of team members. Commitment comes in two flavors:

  1. Yes, I want to do this
  2. … with the other members of my team

Another important question that each team member will ponder is WIIFM – What’s in it for me? Answers to these questions will have a huge impact on the success of a project.

Rules are important, but don’t forget that the human brain has limited capacity. If you want people to follow the rules, restrict yourself to five of them, tops.

Member Shields

Another strategy is to create “member shields” where each member writes their name on top of a shield-like shape. The shield is then separated into four quadrants:

  1. Which skills do I bring into the team?
  2. What do I need to be successful in the team?
  3. What’s in it for me?
  4. Something personal. No dark secrets, just something that turns you into a person.

Write a motto below the shield.

Put those in a place where every team member can see them.

Context

Make sure that the team members know where the team fits into the organization. Post a 10,000-foot view of the company somewhere.

Risks

Agile development is all about risk management: Notice them, rate them, discuss them, act on them.

Good places to look for risks are team boundaries and interactions: Who depends on the team’s work? On whom does the team depend? Does the team have everything it needs?

What does the team know about the future? What do we not know? What are opportunities and threats?

Remember the PAC triangle: Purpose – Alignment – Context. Every move of one corner influences the other two as well.

Also, a lot of risks have their roots in VUCA: volatility, uncertainty, complexity and ambiguity.


Suspend Fail in openSUSE 12.1 After Upgrading KDE

29. June, 2012

When you upgrade openSUSE 12.1’s KDE 4.7 to 4.8 (using this repo), suspend to disk or RAM might stop working. If so, you’ve encountered bug 758379: STR (Suspend to RAM) fails when NetworkManager running and NFS shares mounted

The description is a bit misleading. It also happens for suspend to disk (STD) and when you don’t use NetworkManager.

Workaround: Unmount your NFS shares before you try to suspend:

sudo umount -t nfs -a

If you use NFS v4, then the command is:

sudo umount -t nfs4 -a

To check whether it worked, use this:

mount | grep nfs

This shouldn’t print anything with “type nfs” anymore. Afterwards, suspend should work.