Worse-is-better

18. May, 2010

I just stumbled upon “The Rise of ‘Worse is Better’”. The article deals with the dilemma between “get it right the first time” and “get it as right as possible”. In software development, you often don’t know enough to get it 100% right, and you don’t have the time to learn. Or getting it 100% right just isn’t possible.

In the end, “do it as well as you can” is, all things considered, better than the alternative. Or as Bill Gates allegedly said: “Windows doesn’t contain any bugs that any significant number of users wants to have fixed.”

That nicely explains why programming languages that strive for perfectionism (like Lisp) never really caught on. There are just too few perfectionists – and perfectionism is a recessive trait.


env.js has a new website

17. May, 2010

env.js has a new website at http://www.envjs.com/.

Just in case you don’t know why you need it:

Envjs is a simulated browser environment written in javascript.

If you still don’t know: You can use env.js to simulate a web browser in unit tests (among other things).


Uploaded first snapshot of ePen 0.9

15. May, 2010

I’ve just uploaded the first snapshot of the 0.9 prerelease on SourceForge. Changes since 0.8:

  • Text links in translations now work
  • HTML: use the outline title in the changelog
  • HTML layout: the index section in scene files now keeps some distance from the window border
  • The editing position is remembered; when the project is opened again, the same file opens with the cursor at the same place
  • New editor for the feedback page
  • HTML: the mini-TOC now scrolls to the current scene
  • Going back and forward in the editing history now scrolls the text and places the cursor where it was

You can find it in the download area of the project.


ePen 0.8.0 released

6. May, 2010

I’ve just uploaded the ePen 0.8.0 release on SourceForge.net. You can find it in the download area of the project.

To install, follow the instructions in the README.txt (either in the archive or in the download area).

To get an idea of what the software can do, see my story “Haul” (also available in German).


Useful documentation

26. April, 2010

Documentation is one of the last unconquered areas in software development. The tools tend to get better but the documentation … well, let’s say it could be better. Why?

Part of the problem is that the documentation is written by the developers, but the main reason is that they ask the wrong question. They ask “What does it do?” when they should ask “When would I use this?”

Let’s have a look at an example. The OpenOffice help says for “Save”: “Click on Save or press the key combination Ctrl + S. The document will be saved … A file with the same name and path will be overwritten.”

That explains what the function does but not why anyone would use it. How about this: “Save allows you to store the current state of your work as a file in your file system. When you exit the application and start it again, you can load the file and continue where you left off.”

Aha! How about “Copy”?

“Copies the current selection to the clipboard” is boring to write (because it’s so obvious) and doesn’t answer the user’s question. Maybe that bored feeling when you write documentation like this is really the nagging suspicion that it’s futile?

Try “The Copy operation saves the current selection to the system’s clipboard, from where you can use Paste to insert it in a different place in the same document, in a different document, or even in another application if it can do something useful with the selection.”

Printing: “Print allows you to send the document to any installed printer. If you install a PDF Converter as a printer, you can make your document available to people who could otherwise not access your work digitally.” Suddenly, the help offers strategies and options.

So next time you document something, ask the right question.


The future of data

23. April, 2010

RFid Data Table from a BBC exhibition (CC: by-nc-nd)

People love to share. They share emotions, affection, information, files and personal data. But they don’t want to share that with everyone. Imagine sharing your bank statements with the IRS. Or that you just bought a very expensive TV set with a burglar. Or that you’re not at home for the next four weeks. Or photos and films of your children with a pedophile.

While people don’t talk about their private life in a public forum, they do post it on social networks. They don’t want anyone to look at the data on their hard disks, but they back up the very same data with online backup services. The line between private data and the web is blurring.

Unfortunately, data can’t protect itself, so as soon as you put something online, anyone can see it, copy it, give it to someone else or keep a copy even after you deleted it yourself. The Internet doesn’t forget.

So the obvious solution is that data must become active. It must check who has permission to access it and only reveal its details to people who have permission. How would that work?

Let’s have a look at ssh. At work, I access a server and work with an account, but I have no idea what the password for that account is. How do I log in? With my own credentials: I give a public key to the system administrator and he adds me to the list of people who can log in. If he doesn’t want me anymore, he deletes the key from the list and I lose access. He doesn’t know my password and I don’t know his.

To achieve the same with data, the data must be encrypted. To decrypt it, users must ask a server for the decryption key and identify themselves with their public key.
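A minimal sketch of the idea, assuming a central key server that keeps an access list per document (all names here are hypothetical, and the toy XOR stream stands in for a real cipher such as AES-GCM combined with proper public-key authentication):

```python
import hashlib
import os

def keystream_crypt(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-based keystream.
    Illustration only -- use a real cipher in practice.
    (XOR is its own inverse, so this both encrypts and decrypts.)"""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class KeyServer:
    """Holds the decryption key for each document plus an access list of
    public-key fingerprints, much like ssh's authorized_keys file."""
    def __init__(self):
        self._keys = {}  # document id -> decryption key
        self._acl = {}   # document id -> set of allowed fingerprints

    def register(self, doc_id: str, key: bytes):
        self._keys[doc_id] = key
        self._acl[doc_id] = set()

    def grant(self, doc_id: str, fingerprint: str):
        self._acl[doc_id].add(fingerprint)

    def revoke(self, doc_id: str, fingerprint: str):
        self._acl[doc_id].discard(fingerprint)

    def request_key(self, doc_id: str, fingerprint: str) -> bytes:
        # Only identities on the access list get the key.
        if fingerprint not in self._acl.get(doc_id, set()):
            raise PermissionError("access denied")
        return self._keys[doc_id]

# The document is only ever published in encrypted form.
key = os.urandom(32)
ciphertext = keystream_crypt(b"my bank statement", key)

server = KeyServer()
server.register("doc-1", key)
alice = hashlib.sha256(b"alice public key").hexdigest()
server.grant("doc-1", alice)

# Alice identifies herself by her fingerprint and decrypts.
plaintext = keystream_crypt(ciphertext, server.request_key("doc-1", alice))
```

Revoking access is then a single `revoke` call on the server, just like deleting a key from authorized_keys.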

Of course, there are a couple of issues with this approach:

  1. First of all, it will bloat the data and make processing (much) slower. That might be an issue today, but progress will soon solve it.
  2. Users could decrypt the data once and then keep a decrypted copy. While this is true, is it an issue? These people once had access, so it only becomes a problem if we want to revoke that access. Also, if they don’t back up the data regularly, a hardware failure will solve the problem sooner or later.

    Lastly, we could attach a license to the data which forbids sharing the decrypted copy with anyone. Anyone who did could be sued for license infringement. And let us not forget that most people won’t understand how this all works, so they won’t be able to do it. As long as it works and is comfortable enough to use, they won’t see a reason to, either.

    For those who do understand the technology or want to abuse it, no amount of protection will be enough to stop them. This is why we have laws and courts.

  3. People could lose their data or their password. Happens all the time. But wouldn’t this approach solve both issues? If all data were encrypted and there were servers to distribute credentials, people would have to remember just a single password for all services. The password could be strong and could be changed with ease. Web sites could add users based on their public keys (just like ssh or OpenID). And there would be no need to worry about losing data: you could back it up with an online backup service, since the encryption happens before the backup.

Comments?


Automatically fix computer problems

20. April, 2010

Microsoft announced the availability of a tool called “Fix it”. The idea is that many computer users have similar problems which can be fixed with a simple script (one that downloads and installs a bugfix for a hardware driver, cleans up a registry key, etc.).

Right now, even if you find the issue in the knowledge base, you have to fix it manually. So a program will help many people solve such issues themselves. Also, the scripts can look for problems which the user hasn’t even noticed yet.

I think this would be a great idea for Linux, too. For example, if you try to debug a network problem, the steps are always the same: Check the output of ifconfig, ping a couple of known servers, try to telnet somewhere. The script could ask for these values and save them. When you have network trouble again, it could run the tests automatically, examine the values and give suggestions.
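The steps above could be automated with a small script that runs the usual checks, collects the results, and keeps them for comparison with a known-good snapshot. A sketch (the host names and checks are just examples; it uses plain socket calls instead of `ifconfig`/`ping`/`telnet` so it works without external tools):

```python
import socket

def run_diagnostics(hosts=("www.google.com", "www.kernel.org"), port=80):
    """Run the usual manual network checks and collect the results
    in a dictionary that can be saved and compared later."""
    report = {}
    # Step 1: does name resolution work at all?
    try:
        report["dns_localhost"] = socket.gethostbyname("localhost")
    except OSError as e:
        report["dns_localhost"] = f"FAILED: {e}"
    # Step 2: can we resolve and reach a couple of known servers?
    # (roughly what "ping" plus "telnet host 80" would tell you)
    for host in hosts:
        try:
            addr = socket.gethostbyname(host)
            with socket.create_connection((addr, port), timeout=3):
                report[host] = f"ok ({addr})"
        except OSError as e:
            report[host] = f"FAILED: {e}"
    return report

snapshot = run_diagnostics()
for check, result in snapshot.items():
    print(f"{check}: {result}")
```

Saving one such snapshot while the network works gives the script a baseline; on the next outage, the first check that deviates from the baseline points at the broken layer.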


Using Mercurial with Dropbox

17. April, 2010

If you want to take a Mercurial repository with you, you have several options:

  1. Create a server somewhere. Don’t forget to install all the security patches.
  2. Use a USB stick. Don’t forget it somewhere (like at home) and don’t forget to always push your changes onto it.
  3. Use Dropbox

Dropbox is a file server in the cloud. While they swear your data is safe (“All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password.” – see the features), it’s better to be safe than sorry. Also, Dropbox can’t really cope with the fast changes Mercurial makes to the file system (which can lead to corrupt repositories and missing changesets).

The solution is to create a TrueCrypt container in your Dropbox. Dropbox won’t see any changes as long as the container is mounted. When you dismount the container, Dropbox will check the file for changes (when you write to the container, TrueCrypt only modifies a few sectors). So even if you create a 100MB container, only the initial sync will be slow.

There are a few obstacles, though:

  1. You must remember to mount the container, and push your changes into it.
  2. If you forget to dismount and then push changes into the container on a different computer, you’ll end up with two containers. In this case, mount the second container somewhere, merge the changes using Mercurial and then commit to the original container.
  3. You must install TrueCrypt and Dropbox on all computers where you want to use this.
  4. The cycle “mount-push-dismount” becomes tedious over time.
  5. If you use HgEclipse, the plug-in will forget the local paths if you forget to mount the container before you start Eclipse.
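Obstacle 4 can be softened with a small wrapper script that runs the whole cycle in one go. A sketch in Python that only builds the command list (the truecrypt flags are from memory and may need adjusting for your version; all paths are examples):

```python
from pathlib import Path

def sync_commands(container: Path, mountpoint: Path, repo: Path):
    """Build the mount-push-dismount cycle as a list of commands.
    Returning the list instead of running it makes the script easy
    to inspect and test without a mounted container."""
    return [
        # mount the container (truecrypt flags are an assumption)
        ["truecrypt", str(container), str(mountpoint)],
        # push the local repository into the repo inside the container
        ["hg", "-R", str(repo), "push", str(mountpoint / "repo")],
        # dismount so Dropbox can sync the container file
        ["truecrypt", "-d", str(container)],
    ]

cmds = sync_commands(Path("~/Dropbox/vault.tc").expanduser(),
                     Path("/media/vault"),
                     Path("~/projects/epen").expanduser())
for cmd in cmds:
    print(" ".join(cmd))
```

Once the flags are verified against your TrueCrypt installation, run each entry with `subprocess.run(cmd, check=True)` so the cycle stops at the first failure.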

The OSS dilemma

7. April, 2010

Disclaimer: IANAL

In his post about EPL, GPL and Eclipse plugins (“EPL/GPL Commentary“), Mike Milinkovich says:

What is clear, however, is that it is not possible to link a GPL-licensed plug-in to an EPL-licensed code and distribute the result. Any GPL-licensed plug-in would have to be distributed independently and combined with the Eclipse platform by an end user.

Which is probably true because the goals of the two licenses are incompatible: the EPL was designed by companies which make a lot of money with software, to protect their investments in the source code they contribute to an OSS project. Notice “a lot of money.”

The GPL was designed to make sure companies can’t steal from poor OSS developers and sell a product as their own, or take some source code, add a few lines and then sell the result as their own, etc. The GPL, unlike the EPL, is made as a sword to keep out people who don’t want to share their work under the GPL.

As such, both licenses work as designed, and they are incompatible because their goals are incompatible. We as OSS developers can whine and complain that there is no legal way to build an Eclipse plugin for Subversion without first creating a Subversion client that is EPL-licensed, but that doesn’t change the fact that it is illegal. It’s the price we pay for the freedom we have. If the licenses were different, there would be legal loopholes.

Yes, it sucks.


Thoughts on agile requirements engineering

24. March, 2010

There is an interesting German article about agile RE: “Gedanken über agiles Requirements Engineering” (“Thoughts on agile requirements engineering”).