While any good developer makes sure that his code is healthy all the time *cough*, tools can be a lot of help.
Venkatt Guhesan has compiled a list of 11 tools that you should know about.
Jim Bird has written a great post about reasons to fix bugs and reasons to leave bugs alone: Zero Bug Tolerance Intolerance
A lot of languages compete for the king’s seat held by Java. Most of them solve a lot of the problems Java has, but none of them really takes the crown. As I say: “Why is there more than one database? Because they all suck.”
Now Ceylon enters the stage (slides from the presentation). The main goal is to clean up the SDK while keeping an eye on what was good and what was bad with Java.
I’ve had my share of programming languages. On a scale between 1 and 10 (best), Python gets 9 from me. Java gets 6. Scala gets 5.
So how does Ceylon fare? At first glance, I’d give it a 7.
Pros:
Cons:
Things that leave me puzzled:
Serge Beauchamp wrote a tool to automatically locate and report places where deadlocks can occur: Freescale’s Deadlock Preventer is now released!
Details can be found in this blog post.
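To give an idea of the kind of place such a tool flags, here is a toy lock-order inversion (my own example, not taken from the Deadlock Preventer documentation): two threads take the same two locks in opposite order, and if each grabs its first lock at the same time, neither can ever proceed.

    import java.util.concurrent.TimeUnit;

    public class LockOrderInversion {
        private static final Object A = new Object();
        private static final Object B = new Object();

        public static void main(String[] args) {
            // Thread 1 locks A then B, thread 2 locks B then A.
            new Thread(() -> { synchronized (A) { pause(); synchronized (B) { } } }).start();
            new Thread(() -> { synchronized (B) { pause(); synchronized (A) { } } }).start();
            // With the pause in place, both threads almost always end up
            // waiting on each other forever.
        }

        private static void pause() {
            try { TimeUnit.MILLISECONDS.sleep(100); } catch (InterruptedException ignored) { }
        }
    }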
I just read another great post on The Daily WTF (“Boolean Illogic”). Question: Does validation occur when the status is valid?
if (statusIsNotValid.compareTo( Boolean.FALSE ) != 0) skipValidation = false;
Another great example of why you should prefer positive logic.
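With a positively named, unboxed flag, the same behaviour could read roughly like this (statusIsValid and skipValidation stand in for whatever the real fields are):

    // Hypothetical rewrite: an invalid status always forces validation.
    if (!statusIsValid) {
        skipValidation = false;
    }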
When you start using dependency injection (DI), you probably come from the painful world of singletons. Singletons are a lie. When we were doing structured programming (remember? What we did before OO?), that was called “global variable” and everyone knew they were bad. But hey, OO came along and we had the same problem and to solve it … we used global variables. Only we didn’t name them that. We said “It’s a Singleton!” and everybody was happy.
Except that the mighty singleton has the same problems as the global variable – because they are the same thing.
A solution was sought and DI was invented. When people start to use DI, they are still in the “Singleton” mindset, because you can’t simply drop an idea that has served you (more or less) well over many years. Since a human can’t just forget what he’s been doing for a long time (it’s tradition, after all), singletons leaked into DI, leading to odd designs which felt wrong.
Software developers are paid for their brains. If something feels wrong, it usually is. Most of the early code we come up with when starting with DI violates the Law of Demeter.
A common solution to the problems with many singletons is to replace them with a single singleton (for example one which loads and offers the application context in Spring). While this is convenient, we still have a global variable left.
Another solution is to write constructors that take 27 parameters so you can pass in everything the class needs. If you avoid that trap, your class ends up with 27 setters instead. Holy Ugly, Batman.
How to solve that? Use more DI. Most of the 27 ex-singletons will be passed on to other worker classes. So instead of passing on the singletons to create the worker classes deep down in the code, create the worker classes using DI (so the DI framework can fill in all the ex-singletons they need) and then pass in the 2-3 workers.
For some code, see this article: Dependency Injection Myth: Reference Passing
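As a very rough sketch of the idea, with made-up class names that are not taken from that article: instead of a ReportService constructor that takes Config, Logger, Database, MailGateway and two dozen other ex-singletons, the workers receive those dependencies themselves and the service only asks for the workers.

    // The DI container builds the workers and feeds them the ex-singletons...
    class ReportRenderer {
        // ...Config and Logger are injected here, where they are actually used.
    }

    class ReportStore {
        // ...Database is injected here.
    }

    // ...and the service only receives the two collaborators it really talks to.
    class ReportService {
        private final ReportRenderer renderer;
        private final ReportStore store;

        ReportService(ReportRenderer renderer, ReportStore store) {
            this.renderer = renderer;
            this.store = store;
        }
    }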
I just stumbled over “The Rise of ‘Worse is Better’”. The article deals with the dilemma between “get it right the first time” and “get it as right as possible”. In software development, you often don’t know enough to get it 100% right and you don’t have the time to learn. Or getting it 100% right just isn’t possible.
In the end, “do it as well as you can” is, all things considered, better than the alternative. Or as Bill Gates allegedly said: “Windows doesn’t contain any bugs which any big number of users wants to have fixed.”
Which explains nicely why programming languages which strive for perfectionism (like Lisp) never really caught on. There are just too few perfectionists – and it’s a recessive trait.
Disclaimer: IANAL
In his post about EPL, GPL and Eclipse plugins (“EPL/GPL Commentary”), Mike Milinkovich says:
What is clear, however, is that it is not possible to link a GPL-licensed plug-in to an EPL-licensed code and distribute the result. Any GPL-licensed plug-in would have to be distributed independently and combined with the Eclipse platform by an end user.
Which is probably true because of the incompatible goals of the two licenses: The EPL was designed by companies, which make a lot of money with software, to protect the investments in the source code they contribute to an OSS project. Notice “a lot of money.”
The GPL was designed to make sure companies can’t steal from poor OSS developers and sell the product as their own, or take some source code, add a few lines and then sell it as their own, etc. The GPL, unlike the EPL, is made as a sword to keep away people who don’t want to share their work under the GPL.
As such, both licenses work as designed, and they are incompatible because their goals are incompatible. We as OSS developers can whine and complain that there is no legal way to build an Eclipse plugin for Subversion without first creating a Subversion client which is EPL-licensed, but that doesn’t change the fact that it is illegal. It’s the price we pay for the freedom we have. If the licenses were different, there would be legal loopholes.
Yes, it sucks.
In his blog, stephan writes about the problems you can have as a bug reporter. Basically, when you encounter a bug, you’re in the middle of something that you need to get done. You don’t have time to analyze the bug, collect all the information that might be around, note it down and write a good bug report.
Instead you need to get your job done. Then, later (whenever that might be … tomorrow or in a week or next year), you can worry about the bug. Anyone wondering why bug reports are often so bad?
But there might be a pretty simple solution: Java already can dump its heap (all objects) to a file. So what we need is a way to start this dump and add a screenshot plus a short description to it. This gets stored somewhere and when we’re done with our current task, we can return to the problem, analyze it more deeply or just zip everything up and post it as a raw bug report.
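For the dump-plus-screenshot part, a minimal sketch on a HotSpot (Sun/Oracle) JVM could look like this; the class and file names are my own invention:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.io.File;
    import java.lang.management.ManagementFactory;
    import javax.imageio.ImageIO;

    public class BugSnapshot {
        public static void take(File dir) throws Exception {
            dir.mkdirs();
            // Heap dump via the HotSpot diagnostic MXBean (HotSpot JVMs only).
            HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
            hotspot.dumpHeap(new File(dir, "heap.hprof").getPath(), true); // true = live objects only
            // Screenshot of the whole screen.
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            ImageIO.write(new Robot().createScreenCapture(screen), "png", new File(dir, "screen.png"));
            // The short description could simply be written into dir as a text file.
        }
    }

Wiring this up to a hot key or an Eclipse command, and prompting for the short description, would be the missing pieces.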
Luckily, Eclipse is OSS (a.k.a. “Nothing is impossible if you don’t have to do it yourself.”) See Bug 304544.
There is a nice series of articles by Neal Ford on IBM’s developerWorks which talks about software design and how modern languages help you come up with a clear and cost-efficient design. To get a grasp of why this is important, I like this quote:
Building software isn’t like digging a ditch. If you make compromises when you dig a ditch, you just get uneven width or unequal depth. Today’s flawed ditch doesn’t prevent you from digging a good ditch tomorrow. But the software you build today is the foundation for what you build tomorrow. Compromises made now for the sake of expediency cause entropy to build up in your software. In the book The Pragmatic Programmer, Andy Hunt and Dave Thomas talk about entropy in software and why it has such a detrimental effect (…). Entropy is a measure of complexity, and if you add complexity now because of a just-in-time solution to a problem, you must pay some price for that for the remaining life of the project.
Any software developer should be familiar with the concept of entropy and how it affects their lives.
In a later installment, Neal shows how modern languages let you implement many of the GoF design patterns much more naturally, using Groovy for the examples.