Never Rewrite From Scratch

24. April, 2024

In many projects, there is code which is so bad that no one wants to touch it. Eventually, the consensus is “we need to rewrite this from scratch”.

TL;DR: Never do this in one big step. Break it down into 1-4 hour pieces of work. Do each on the side while doing your normal work. Start today, not when you need to add a feature to or fix a bug in the messy code.

How to Make Rewrites Work

The goals we need to achieve:

  • Doing feature work during the rewrite must be easy.
  • The new code must be substantially better.
  • It would be great if we could stop halfway. Not that we have to, but the choice would be valuable.

If you think about this from a customer perspective: They pay you for (working) features, not for keeping your work area clean. The latter is a given. How would you feel if your favorite restaurant put “Cleaning spillover: $50” on your next bill?

“Stop the world” rewrites should be avoided. They are incredibly dangerous. Imagine you estimate the rewrite will take two weeks. After two weeks, you notice you need more time. How much? Well … one week … maybe? You get another week. During which you find something unexpected which gives you an idea why the code was so messy in the first place. What now? Ask for another month? Or admit defeat? Imagine how the meeting will go where you explain the situation to your manager. You have already spent 15 days on this. If you stop now, those will be wasted. Your lifetime will be wasted. And on top of this, you will still have the bad code. You will have spent a lot of money and gained nothing at all.

That’s why the rewrite must have a manageable impact on the productivity of the team. Not negligible, but we must always stay in control of how we spend our effort.

The easiest and most reliable way to make code better is to cut it into smaller pieces and to add unit tests along the way. A few lines of code are always easier to understand than a thousand. Tests document our expectations in ways that nothing else can.

Example

You have this god class that loads data from several database tables, processes it and then pumps the output into many other tables and external services.

Find the code which processes the data and move it into a new class. All fields become constructor parameters, all local variables become method parameters. You now have a simpler god class and a transformer. Write one or two unit tests for the transformer. Make sure you don’t need a database connection – if the transformer fetches more data, move that into a new helper and pass it as constructor parameter. Or load all the data in advance and pass a map into the transformer.
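A minimal sketch of this extraction in TypeScript (all names – UserTransformer, the row and summary shapes, the label prefix – are invented for illustration; the real code might just as well be Java):

```typescript
// Hypothetical shapes of the data the god class used to handle.
interface UserRow { id: number; name: string; active: boolean; }
interface UserSummary { id: number; label: string; }

// The extracted transformer: former fields became constructor
// parameters, former local variables became method parameters.
// It never touches a database or an external service.
class UserTransformer {
  constructor(private readonly labelPrefix: string) {}

  transform(rows: UserRow[]): UserSummary[] {
    return rows
      .filter((row) => row.active)
      .map((row) => ({ id: row.id, label: `${this.labelPrefix}${row.name}` }));
  }
}

// The first unit test is now trivial – no database connection needed.
const transformer = new UserTransformer("user:");
const result = transformer.transform([
  { id: 1, name: "Alice", active: true },
  { id: 2, name: "Bob", active: false },
]);
console.log(result); // [ { id: 1, label: 'user:Alice' } ]
```

The point is that the transformer is pure: everything it needs comes in through the constructor or the method parameters, so the test runs in milliseconds.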

Let’s see what we did in more abstract terms.

How to rewrite bad code

In general, find an isolated area of functionality (configuration, extracting data, validation, transforming or loading into destination) and cut it out with the least amount of changes elsewhere. Ideally, you should use your IDE’s refactoring “move to new method in a new class.”

What have we achieved:

  • We cut the complicated mess into two pieces, one much easier to understand than before.
  • We have invested just a little bit of time.
  • We now understand the mess a bit better.
  • The remaining mess is less code, making it easier to deal with in the future.
  • We now have more code under test than before.
  • The extracted code is much, much easier and faster to test than the old mess.
  • When there is a bug in the extracted code, we can now write a test for it and fix it much more efficiently.
  • There is only a small risk that we introduced new bugs – or none at all, if we used an automated refactoring.
  • We can merge this into the shared/production branch right away. No need for a long-living rewrite branch.
  • If it is valuable, we can add more tests. But we don’t have to. This gives us options.

Rinse & repeat. After a few rounds of this, you will begin to understand why the old code is messy. But by then, you will have moved all the irrelevant stuff into unit-tested code. So fixing the core of the mess will be much easier because now,

  • it’s much less code to deal with,
  • a lot of complexity will be gone,
  • many unit tests have your back, and
  • you will have begun to understand this code despite its lack of good documentation and its messiness.

The best parts:

  • No additional stress when doing it this way.
  • If you make a mistake, you won’t have wasted a lot of the client’s money AND your team’s lifetime AND your reputation AND you can easily revert it.
  • You can stop at any time – there is never a “if we stop now, it’s all for naught” situation.
  • Several people can do this in parallel with just a little bit of coordination.
  • Management and customers are fine with tidying up for a few hours every week.
  • If something else is more important, you can switch focus. If the customer needs a new feature here, you can spend more time extracting the least messy stuff because it will make you more efficient. Or not. Again, you get a choice that you didn’t have before.
  • You now have something valuable you can work on when you’re blocked for one hour.
  • You get to fix it (eventually).
  • At the end, everyone will be happy.

It just takes a bit longer from start to finish. Treat yourself to a bag of sweets when the mess is good enough. You deserve it.

If you’re a manager: Organize lunch/dinner for the whole team. They deserve it.

Next, let us look at whether we should do this at all.

Why We Should Rewrite

Rewrites don’t bring immediate business value. From a developer perspective, the rewrite is absolutely necessary. From a customer perspective, it’s a waste of time: “Spend my money to get the same functionality?”

Cleanliness

So let’s think about this as cleanliness. Everyone has to clean their room, desk, body once in a while. You don’t have to shower three times a day. But once a week is definitely not enough. So there is a sweet spot between several times a day and once per week.

Why? How? When?

Why do we do it? Because dirt accumulates over time and if you don’t tidy up regularly, the cost of tidying up suddenly explodes PLUS you won’t be able to do your work efficiently.

How long do we do it? You shower for 10 – 30 minutes, not a whole day. So spend at most 10% of your working time on this. I find that half an hour every day works great.

When do we start? Now. Really. Right now. Don’t let the dirt evolve into cancer and kill you. Or in software terms: Which one is better?

  1. You start improving the worst part of your code base today. After a few fixing rounds, you get a high priority bug/feature which has to be fixed/implemented right now.
  2. You get a high priority bug/feature which has to be fixed/implemented right now. It’s in the worst part of the code base. You haven’t done any tidying there, yet.

The lesson here is: Tidy up regularly and in short bursts. Focus on where you expect changes in the near future over code that hasn’t changed for a long time.

Looking at software in general: Spending several consecutive days on cleanup is too much. Go without cleanup for a week and the software will soon start to reek. Aim for at least one hour per week and at most four hours.

Least Messy

Apply this to rewrites: Today, you have a huge mess. Find a part in it that is least “messy” and fix that. Some time later, depending on your workload, do it again.

I define “least messy” like this:

  • Most independent – changes here will affect the least amount of code.
  • Easiest to understand – try to find something that does one thing only.
  • Can be extracted within one to four hours of work, including at least one new unit test.

We now know how to rewrite and when to do it. But one question is still open: What makes rewrites fail?

Why Rewrite From Scratch Fails

Many developers believe that a rewrite from scratch is the only possible solution for really bad code. More importantly, they think they can fix it within a certain – usually low – time budget. We have the code to guide us, right? Should be easier the second time around, right? With all that we’ve learned since?

Usually not. You don’t understand the code: this is the main reason why you want to scrap it! It has hidden issues which drive the “badness”. You don’t have a reliable list of all features. Lastly, you either have to lie to your manager about the effort or you won’t get permission.

Let’s look at each of those in more detail.

Bad code is hard to understand

The first reason why you want to rewrite from scratch is that you don’t understand the bad code. This makes it harder to change and drives your urge to get rid of it.

For the same reason, it will also slow you down during the rewrite. The rule of thumb with bad code is: “If it took N hours to write the first time, it will take ~N hours to rewrite from scratch”. This only applies when

  • you have a competent team,
  • everyone involved in writing the bad code is still there.

You can do better, if

  • all the requirements that ever went into this code are readily available,
  • there is good documentation,
  • not much time has passed since the mess was created.

But usually, the messy code is in this state precisely because the first two were missing the first time around, and the third doesn’t hold because no one dared to touch the code since it always caused problems.

For these reasons, the bad code will slow you down instead of helping you during the rewrite. But that’s not all.

Hidden Design Issues

Why is the code so bad? There is a reason for that. No matter how bad it looks today, it was written by smart, competent people just like you. What happened?

Often, it was written with assumptions that turned out to be somewhat wrong. Not totally off target, just not spot on. Like how complex the underlying problem is. The original design didn’t solve the problem in an efficient way. The code didn’t work well to begin with, time pressure built up, eventually the team had to move on. Code had to be made to work “somehow” to meet a deadline.

Do you understand today where you went wrong the first time? Do you know how to solve it, now? Without this, your attempt to rewrite will produce “bad code, version 2”. Or at best “slightly better code, way over budget”. In addition to those two, you might even know less today than the first time.

Lack of Information

The third reason is that you don’t have good documentation. The knowledge of most of the features will be lost or hidden in old bug/feature tickets and outdated wiki pages.

Since you can’t trust the code, you will have to painstakingly rebuild the knowledge that went into the first version without many of the information sources you had the first time. Many developers involved in the first version have left. Even if they are still around: The reasons for most of the decisions will be long forgotten by now.

Therefore, information-wise, you probably start off worse than whoever made this mess the first time. Which was some time ago. The bad code has festered into an ugly mess of thousands of lines of code, workarounds and hasty bug fixes. How long will it take to clean this up?

Realistic Estimates

The last reason is that a rewrite takes longer than a few days. If this wasn’t the case, you’d have solved the problem already – no one argues about a rewrite that takes just a few hours.

Here, we have a psychological problem. No one knows how long it took to write the original code – it “evolved.” Maybe a month? Half a year?

Well, we know better this time, so it has to take less time. We do? Why? Okay, how much less? Well … this code is so bad, it hurts so much … it has to go! … it’s embarrassing to even talk about this … how much is management willing to … and you’re doomed. Instead of giving an honest estimate, you try to find a number that will green-light the attempt. Or you give an honest estimate and management will (correctly) say “No.”

Challenges

You will face many challenges. I’ve listed suggestions on how to handle them.

My boss/client won’t let me

Argue that you need time to clean your work area, just like a carpenter needs to sweep the chips from the floor between projects. When too much dirt accumulates, you can’t work quickly or safely. Which means new features will either be more expensive or they will have more bugs.

We don’t have time for this!

One picture says more than a thousand words in this case: https://hakanforss.wordpress.com/2014/03/10/are-you-too-busy-to-improve/

It’s so bad, we can’t fix individual parts of it!

Well, let me know how it went.

For everyone else: This is in production. So it can’t be that bad. As in: it’s not killing you right now. It’s just very painful and risky to make changes there. Despite how bad the whole is, a lot of thought and effort went into the individual changes. Often more than elsewhere, because extra care was taken since this was a dangerous area. This also means that it would be a terrible waste to throw everything away just because it looks like a huge reeking dump of garbage from a distance. You know how they fix oil spills? They put up a barrier and then it’s one dirty bird / beach at a time.

So look at the messy code. Try to see what you can salvage today. Keep all the good stuff that you can reuse. Clean it. Keep chipping away at the huge pile. Move carefully so it can’t come crashing down. As your knowledge grows, the remaining work shrinks. Eventually, you will be able to replace whole code paths. And one day, guaranteed, the huge pile will become a molehill that you can either stomp into the ground with the heel of your boot or … ignore.

While I’m at it, let me just fix this as well!

You will often feel the urge to go on cleaning after you started. Just one more warning. Oh, and I can extract this, now! And I know how to write five more unit tests.

Set a time limit and learn to stick to it. If you have more ideas how to improve things, write them down. A comment in the code works well since someone else might pick it up. If you clean code for three days, other people won’t praise you. Imagine it the other way around: There are so many important things to do right now and your colleague just spent three days cleaning up compiler warnings?

Also, remember the 80:20 rule: Most clean ups will only take a bit of time. As soon as you get in the “hard to fix” area, you’re spending more and more effort. Eventually, the clean up will cost more than you’ll ever benefit from it. Keeping it time boxed will prevent you from falling into this trap.

I don’t have time to write a unit test

Come back when you have. Adding tests is an important part of the work. Skipping them is like a carpenter sweeping the chips under a rug. You need this test. Because …

Writing the unit test takes ages

Excellent! You have found a way to measure whether you’re doing it right or wrong. If writing the new unit test is hard, there is a problem that you don’t understand yet. The code you extracted has turned out to be much more dangerous than you thought. Great! Close your eyes and focus on that feeling. Learn to recognize it as early as possible. This emotion will become one of the most valuable tools in your career. Whenever you feel it, stop immediately. Get up. Get a coffee. Stare at the wall. Ask yourself “Why am I feeling this? Which mistake am I about to make?”

Now let’s look at reasons why the unit test is so hard to write.

The unit test needs a lot of setup

This indicates that you have an integration test, not a unit test. You probably failed to locate what I called “least messy” above. Document your findings and revert. Try to find a part to extract that has fewer dependencies.

The unit test needs complicated data structures

Looks like you need to improve the design of the data model. Check how you can make the different data classes more independent of each other. For example, if you want to write tests for the address of an invoice, you shouldn’t need order items. If improving your data model will make it more efficient to write the tests, stop the tidying here and clean the data model instead.

Option #2: Consider creating a test fixture with test data builders for your model classes. The builders should produce standard test cases. In your tests, you create the builder, then modify just the fields that your test needs and call build() to get a valid instance of your complex model.
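A sketch of such a test data builder (the Invoice model and the builder’s methods are made up; adapt them to your own model classes):

```typescript
// Invented model classes for the sketch.
interface Address { street: string; city: string; }
interface OrderItem { sku: string; quantity: number; }
interface Invoice { number: string; address: Address; items: OrderItem[]; }

// The builder produces one valid "standard" invoice. Tests override
// only the fields they actually care about and then call build().
class InvoiceBuilder {
  private invoice: Invoice = {
    number: "INV-0001",
    address: { street: "Main St 1", city: "Springfield" },
    items: [{ sku: "A-1", quantity: 1 }],
  };

  withCity(city: string): this {
    this.invoice.address = { ...this.invoice.address, city };
    return this;
  }

  withItems(items: OrderItem[]): this {
    this.invoice.items = items;
    return this;
  }

  build(): Invoice {
    return this.invoice;
  }
}

// A test about the address never has to mention order items:
const invoice = new InvoiceBuilder().withCity("Berlin").build();
console.log(invoice.address.city); // Berlin
```

Every test stays short because the builder supplies all the defaults; changed requirements only need to be fixed in one place.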

Writing the unit test fails for another reason

Write a comment in a text editor describing what you tried and why it failed. Include all useful information, including class names and stack traces. Revert your changes. Commit the comment.

You really can’t achieve much more here. Stop for now, do a feature, and resume tidying tomorrow. If you have an idea then how to improve this code: Do it. If not, tidy up elsewhere.

I can’t find anything to extract

Try to extract fewer lines of code. Sometimes, extracting a single line into a method with a good name helps tremendously in understanding complex code. This is counterintuitive: How can turning one line into four make the code easier to understand? Because of how the brain works: your brain doesn’t read characters, it looks for indentation. Reading a good method name is faster and more efficient than evaluating an 80-character expression in your head.

Next, sort each code line into “fetching data from somewhere”, “transforming the data” and “loading the data into something”. For methods that mix two or three of those, try to split the method into two or three methods or classes where each does just one thing.
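A sketch of the single-line extraction described above (the shipping rule and all names are invented):

```typescript
// Invented example: a condition that is hard to evaluate in your head.
interface Order { total: number; customer: { country: string; vip: boolean } }

// Before: the whole rule crammed into one expression.
function shippingCostBefore(order: Order): number {
  return order.total > 100 && (order.customer.country === "DE" || order.customer.vip) ? 0 : 5.9;
}

// After: the expression gets a name. More lines, but the intent of
// shippingCost() is now readable at a glance.
function qualifiesForFreeShipping(order: Order): boolean {
  return order.total > 100 && (order.customer.country === "DE" || order.customer.vip);
}

function shippingCost(order: Order): number {
  return qualifiesForFreeShipping(order) ? 0 : 5.9;
}

const order: Order = { total: 150, customer: { country: "DE", vip: false } };
console.log(shippingCostBefore(order), shippingCost(order)); // 0 0
```

Both versions behave identically, which is exactly what makes this extraction safe to do in small steps.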

Conclusion

The net effect of the above is that software developers tend to underestimate rewrites. The result: the rewrite costs much more than expected. Management is unhappy: “just can’t trust the estimates of developers”. Developers are unhappy: “management will not allow us to do this again” and “I put so much effort into this and everyone hates me for it”. The customer is very, very unhappy (“I paid how much to get what I already have??? And what about … ?? You postponed it? I needed that this week! Do you have any idea how much revenue I lost …”).

So the only solution which will work most of the time:

  • Cut the huge rewrite into small, manageable parts.
  • Each part should slightly improve the situation.
  • Add at least one unit test to each improved part.
  • Spend a small amount of your weekly work time on tidying.
  • Merge quickly.
  • Start today.

See Also


Solar Power from Space is a Scam

4. October, 2023

There will never be an orbital PV plant that beams any substantial amount of energy from space down to Earth.

  1. Let’s assume for a moment that you could collect 1 GW of power with the plant in orbit. Who do you think will allow any state on this planet to put a 1 GW microwave gun into orbit where it can target almost any place on Earth?
  2. Even if you trust your government, computers will control the process. How long will it take for hackers to nuke a city by redirecting the beam?
  3. A 1 GW plant is a joke; China alone had 392 GW installed at the end of 2022.
  4. Getting the material into orbit will be very expensive, even when SpaceX brings down the price some more. For the same price, you will be able to build a power plant on the ground that is several times bigger. So it’s not economical.
  5. The beam will target a receptor on the ground. If you live far away from that place, no power for you.
  6. This receptor will be huge (10 km diameter or more). If you hate huge, ugly structures like PV plants on the ground, this won’t work for you, either.
  7. Space in space is more limited than you think.
    • If you put the solar panels in an orbit around Earth, they will sometimes be in the shadows of Earth or the Moon (and not working during those times) OR
    • you need to give them engines (which need fuel that you have to send up for $$$) plus the structure will need to be much more sturdy (= $$) OR
    • you must put it at one of the five Lagrange Points where it will take up so much space that you can’t do anything else there AND that means the ground station will rotate away from you so you can’t send energy down all the time; you could send the energy to several ground stations but those would have to be in different countries. See also: point 5.
  8. Point 7 means that we can only put up a few plants, maybe only a single one. So every other nation will object.
  9. Beaming energy down means converting the electric energy to microwaves. You have pretty big losses when that happens and you need to do the opposite on the ground, so you lose not x% but 2 * x% of the collected power. Which means you can again build a bigger plant on the ground for the same money.
  10. PV power plants need maintenance, even in space. So you need a crew. Which means living quarters, life support, lots of rockets going there with supplies and fresh crew. This will be very expensive, even if you pay the minimum wage of … probably $100 / hour? You need trained astronauts for this or you risk that your plant will be broken much of the time because the crew killed itself.
  11. This thing will be huge (maybe 10 square kilometers). No one has ever built a structure of this size in space. It’s not technically impossible, just expensive. There will be several failed attempts which someone will have to dismantle before you can start again because you simply can’t afford a 1 km2 piece of junk floating next to your plant waiting to crash into something important.
  12. If this structure collapses, it will generate a huge cloud of debris, possibly triggering the Kessler syndrome.
  13. The atmosphere reduces the power of PV on the ground a bit, but it’s just a few percent. The biggest problems on the ground are dust and rain. That’s the only respect in which a space-based PV plant is better.
  14. After a few years, you will have to decommission the plant. That means many rockets to bring all the junk back down to Earth. It will also leave a lot of small junk behind which will poison the space for anything else for centuries.

In the end, the small advantages don’t make up for the huge costs and military security risks. It’s much cheaper to build huge PV plants on the ground, using cheap labor and materials which can be delivered by truck.

That’s why I’m convinced an orbital PV plant is a nice dream but it will stay a dream. It’s okay as a thought experiment but it can’t solve any pressing or important problems. There are solutions that are much more realistic and cheaper, for example PV plants in deserts where you plant grass in the shadow of the panels to cool the panels and turn the desert green again.

Yeah, they are a bit ugly on the ground but if that is a killer for you, either stop using electricity or live with it. No orbital PV plant will make a noticeable difference here during your entire lifetime.

See also: Space-based solar power


The one thing we can’t produce at industrial scale

14. May, 2023

Silence.


Chained Unit Tests – CUT

29. March, 2023

The CUT approach allows you to test logically related parts or to gradually replace integration tests with pure unit tests.

Let’s start with the usual app: There is a backend server with data and a frontend application. Logically speaking, those are connected, but the backend is written in Java and the frontend in TypeScript. At first glance, the only way to test this is to

  1. Set up a database with test data.
  2. Start a backend server.
  3. Configure the backend to talk to the database.
  4. Start the frontend.
  5. Configure the frontend to talk to the test backend.
  6. Write some code which executes an operation in the frontend to test the whole.

There are several problems with this:

  • If the operation changes the database, you sometimes have to undo this before you can run the next test. The usual example is a test which checks the rendering of a table of users and another test which creates a new user.
  • The test executes millions of lines of code. That means a lot of causes for failures which are totally unrelated to the test. The tests are flaky.
  • If something goes wrong, you need to analyze what happened. Unlike with unit tests, the problem can be in many places. This takes much more time than just checking the ~ 20 lines executed by a standard unit test.
  • It’s quite a lot of effort to make sure you can render the table of users.
  • It’s very slow.
  • Some unrelated changes can break these tests since they need the whole application.
  • Plus several more but we have enough for the moment.

CUT is an approach that can help here.

Step 1: Rendering in the Frontend

Locate the code which renders the table. Ideally, it should look like this:

  1. Fetch list of elements from backend using REST
  2. Render each element

Change this code in such a way that the fetching is done independent of the rendering. So if you have:

renderUsers() {
    const items = fetchUsers();
    return items.map((it) => renderUser(it));
}

replace that with this:

renderUsers() {
    const items = fetchUsers();
    return renderUserItems(items);
}
renderUserItems(items) {
     return items.map((it) => renderUser(it));
}

At first glance, this doesn’t look like an improvement. We have one more method. The key here is that you can now call the render method without fetching data via REST. Next:

  1. Start the test system.
  2. Use your browser to connect to the test system.
  3. Open the network tab.
  4. Open the users table in your browser.
  5. Copy the response of fetchUsers() into a JSON file.
  6. Write a test that loads the JSON and which calls renderUserItems().

This now gives you a unit test which works even without a running backend.
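Such a test can be sketched like this. To keep the example self-contained, the render functions and the fixture are inlined; in a real project, renderUserItems() would be imported from the production code and the JSON loaded from a file next to the test:

```typescript
// Invented stand-ins for the real functions.
interface User { id: number; name: string; }

function renderUser(user: User): string {
  return `<li>${user.name}</li>`;
}

function renderUserItems(items: User[]): string[] {
  return items.map((it) => renderUser(it));
}

// The response of fetchUsers(), copied from the browser's network tab.
const fixture = `[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]`;

// The "unit test": render the recorded data without any backend.
const rendered = renderUserItems(JSON.parse(fixture) as User[]);
console.log(rendered); // [ '<li>Alice</li>', '<li>Bob</li>' ]
```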

We have managed to cut the dependency between frontend and backend for this test. But soon, the test will give us a false result: The test database will change and the frontend test will run with outdated input.

Step 2: Keeping the test data up-to-date

We could use the steps above to update the test data every time the test database changes. But a) that would be boring, b) we might forget it, c) we might overlook that a change affects the test data, d) it’s tedious, repetitive manual work. Let’s automate this.

  1. Find the code which produces the JSON that fetchUsers() asks for.
  2. Write a unit test that connects to the test database, calls the code and compares the result with the JSON file in the frontend project.

This means we now have a test which fails when the JSON changes. So in theory, we can notice when we have to update the JSON file. There are some things that are not perfect, though:

  • If the test fails, you have to replace the content of the JSON file manually.
  • It needs a running test database.
  • The test needs to be able to find the JSON file which means it must know the path to the frontend project.

Step 2 a: Update the JSON file

There are several solutions to this:

  • Use an assertion that your IDE recognizes and which shows a diff when the test fails. That way, you can open the diff, check the changes, copy the new output, open the JSON file, paste the new content. A bit tedious but if you use keyboard shortcuts, it’s just a few key presses and it’s always the same procedure.
  • Add a flag (command line argument, System property, environment variable) which tells the test to overwrite the JSON when the test fails (or always, if you don’t care about wear & tear of your hardware). Since all your source code is under version control, you can check the diff there and commit or revert.
    • Optional: If the file doesn’t exist, create it. This is a bit dangerous but very valuable when you have a REST endpoint with many parameters and you need lots of JSON files. That way, the first version gets created for you and you can always use the diff/copy/paste pattern.
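A sketch of that overwrite flag, assuming Node and an environment variable named UPDATE_GOLDEN (both the variable name and the file handling are choices, not requirements):

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Golden-file assertion: compares actual output against a stored file.
// If the file is missing, it is created; if UPDATE_GOLDEN=1 is set,
// it is overwritten. Review the resulting diff in version control.
function assertMatchesGoldenFile(actual: string, path: string): void {
  if (!existsSync(path) || process.env.UPDATE_GOLDEN === "1") {
    writeFileSync(path, actual, "utf8");
    return;
  }
  const expected = readFileSync(path, "utf8");
  if (actual !== expected) {
    throw new Error(
      `Output differs from ${path}; rerun with UPDATE_GOLDEN=1 to update`
    );
  }
}

// Usage in a test (creates the golden file on the first run):
const json = JSON.stringify([{ id: 1, name: "Alice" }]);
assertMatchesGoldenFile(json, "./users.golden.json");
```

The same helper covers the optional “create the file if it doesn’t exist” case, so the first version of every fixture is generated for you.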

You probably have concerns that mistakes could slip through when people mindlessly update the JSON without checking all the changes, especially when there are a lot.

In my experience, this doesn’t matter. For one, it will rarely happen.

If you have code reviews, then it should be caught there.

Next, you have the old version under version control, so you can always go back and fix the issue. Fixing it will be easy because you now have a unit test that shows you exactly what happens when you change the code.

Remember: Perfection is a vision, not a goal.

Step 2 b: Cut away the test database

Approaches to achieve this from cheapest to most expensive:

  • Fill the test database from CSV files. Try to load the CSV in your test instead of connecting to a real database.
  • Use an in-memory database for the test. Use the same scripts to set up the in-memory database as the real test database. Try to load only the data that you need.
    • If the two databases have slightly different syntax, load the production script and then patch the differences in the test to make the same script work for both.
  • Have a unit test that can create the whole test database. The test should verify the contents and dump the database in a form which can be loaded by the in-memory database.
  • Use a Docker image for the test database. The test can then run the image and destroy the container afterwards.

Step 2 c: Project organization

To make sure the backend tests can find the frontend files, you have many options:

  • Use a monorepo.
  • Make sure everyone checks out the two projects in the same folder and using the same names. Then, you can just go one up from the project root to find the other project.
  • Use an environment variable, System property or config file to specify the path. In the last case, make sure the name of the config file contains the username (Java: System property user.name) so every developer can have their own copy.

What else can you do?

There are several more things that you can add as needed:

  • Change fetchUsers() so you can get the URL it will try to fetch from. Put the URL into a JSON file. Load the JSON in the backend and make sure there is a REST endpoint which can handle this URL. That way, you can test the request and make sure the fetching code in the frontend keeps working.
  • If you do this for every REST endpoint, you can compare the list from the tests against the list of actual endpoints. That way, you can delete unused endpoints or find out which ones don’t have a test, yet.
  • You can create several URLs with different parameters to make sure the fetching code works in every case.

Conclusion

The CUT approach allows you to replace complex, slow and flaky integration tests with fast and stable unit tests. At first, it will feel weird to modify files of another project from a unit test or even trying to connect the two projects.

But there are several advantages which aren’t obvious:

  1. You now have test data for the default case. You can create more test cases by copying parts of the JSON, for example. This means you no longer have to keep all edge cases in your test database.
  2. This approach works without understanding what the code does and how it works. It’s purely mechanical. So it’s a great way to start writing tests for an unknown project.
  3. This can be added to existing projects with only small code changes. This is especially important when the code base has few or no tests since every change might break something.
  4. This is a cheap way to create test data for complex cases, for example by loading the JSON and then duplicating the rows to trigger paging in the UI rendering. Or you can duplicate the rows and then randomize some fields to get more reasonable test data. Or you can replace some values to test cases like very long user names.
  5. It gives you a basis for real unit tests in the frontend. Just identify the different cases in the JSON and pick one example for each case. For example, if you have normal and admin users and they look different, then you need two tests. If there is special handling when the name is missing, add one more test for that. Either get the backend to create the fragments of the JSON for you or load the original JSON and then filter it. Make sure you fail the test when the expected item is no longer in the list.
  6. The first test will be somewhat expensive to set up. But after that, it will be cheap to add more tests, for example for validation and error handling, empty results, etc.
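To make the chain concrete, here is a minimal Python sketch. All names and paths are hypothetical: a backend test saves its response as a JSON fixture inside the frontend project, and a frontend test loads that fixture and checks one example per case (a normal user and an admin user), failing loudly when an expected case disappears.

```python
import json
from pathlib import Path

# Hypothetical path: the backend test writes its fixture straight into
# the frontend project's test-data folder (both projects are assumed to
# be checked out side by side).
FIXTURE = Path("frontend/tests/fixtures/users_default.json")

def backend_test_list_users():
    """Backend link of the chain: call the endpoint (faked here) and
    save the response as a fixture for the frontend tests."""
    response = {"users": [{"name": "Alice", "role": "admin"},
                          {"name": "Bob", "role": "user"}]}
    FIXTURE.parent.mkdir(parents=True, exist_ok=True)
    FIXTURE.write_text(json.dumps(response, indent=2))

def frontend_test_render_users():
    """Frontend link of the chain: load the fixture the backend test
    produced and check one example for each case we care about."""
    data = json.loads(FIXTURE.read_text())
    roles = {user["role"] for user in data["users"]}
    # Fail the test when an expected item is no longer in the list.
    assert "admin" in roles, "expected admin user missing from fixture"
    assert "user" in roles, "expected normal user missing from fixture"
```

In a real setup, the backend function would call the actual REST endpoint and the frontend test would feed the fixture into the rendering code; the shape of the chain stays the same.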

Why “chained unit tests”? Because they connect different things in a standard way, like the links of a chain.

From a wider perspective, they allow you to verify that two things will work together. We routinely use the same approach when we expect the compiler to verify that the methods we call exist and that the parameters are correct. CUT allows you to do the same for other things:

  • Code and end user documentation.
  • Code and formulas in Excel files.
  • Validation code which should work exactly the same in frontend and backend.
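The last bullet can be sketched in a few lines of Python. Everything here is hypothetical: a backend test records what its validator accepts and rejects as a shared file of test vectors, and a frontend test replays those vectors against its own validator so the two can never silently drift apart.

```python
import json
from pathlib import Path

# Hypothetical shared file of validation test vectors, written by the
# backend test and replayed by the frontend test.
VECTORS = Path("shared/validation_vectors.json")

def backend_write_vectors():
    """Backend link: record what its validator accepts and rejects."""
    def backend_is_valid_name(name):  # stand-in for the real validator
        return 0 < len(name) <= 20
    cases = ["", "Alice", "x" * 21]
    vectors = [{"input": c, "valid": backend_is_valid_name(c)}
               for c in cases]
    VECTORS.parent.mkdir(parents=True, exist_ok=True)
    VECTORS.write_text(json.dumps(vectors))

def frontend_check_vectors(frontend_is_valid_name):
    """Frontend link: its validator must agree on every vector."""
    for vector in json.loads(VECTORS.read_text()):
        assert frontend_is_valid_name(vector["input"]) == vector["valid"], vector
```

If someone changes the length limit on only one side, the replayed vectors disagree and the test fails, which is exactly the kind of drift CUT is meant to catch.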

How Quantum Entanglement Works – For Dummies

5. December, 2022

So you’ve heard about this “quantum entanglement” stuff and how Einstein was apparently worried it might break the speed of light.

It doesn’t and he wasn’t.

Here is the simple version: Take a piece of paper. Rip it apart once. Check the pieces that you got. Maybe scribble something on them. Mix the two pieces behind your back. Give one of them to a friend without looking. Send the friend to the end of the universe. If he refuses, find a real friend. Wait a few billion years. Open your hand. In that instant, no matter how far away your friend is, you will know which piece he has in his hands.

The complicated version: Quantum is weird but some stuff is actually easy to understand. There are just a few ways to create entangled particles. All of them have something in common: The pieces must add up exactly to what you put in. This isn’t magic or some badly written Star *beep* episode. Imagine you put in a piece of paper (or a photon – a blip of light). You can’t have more paper (or more light) after splitting it. In the case of the photon: If you add energy, you get more light but no entanglement.

So the entangled photon pairs are always half of the original in terms of energy (which roughly translates to “half as bright”). And they always go in exactly opposite directions (see conservation of momentum). Things like that. Which means you know everything about the two particles except one thing: You don’t know which is which unless you look.

If you keep one of them around and send the other away, and then at some later point look at the one you kept, you know exactly and instantly what the other must look like, no matter how far away it is now.

But you can’t change the far particle anymore. Same as you can’t add text to the paper which your friend at the edge of the universe is holding. So you can’t use this to beam information.

Sorry.

Be grateful for having friends like that. A pity that you sent him away.


When to put generated code under version control

29. June, 2022

Many people think that when a computer generates code, there is no point in putting it under version control. In a nutshell: If you generate the code once with a tool that you’re confident in, there is indeed no point. If you need to tweak a lot, version control will make your life so much easier.

Decision tree:

  • Do you need to tweak the options of the code generator until everything works? If so, then yes.
  • How confident are you with using the code generator? If not very, then yes.
  • Is the code generator mature? If so, then no.

Some background: Let’s compare a home-grown code generator which is still evolving with, say, the Java Compiler (which generates byte code). The latter is developed by experienced people, tested by big companies and used by thousands of people every day. If there is a problem, it was already fixed. The output is stable, well understood and on the low end of the “surprise” scale. It has only a few options that you can tweak and most of them, you’ll never even need to know about. No need to put that under version control.

The home-grown thing is much messier. New, big features are added all the time. Stuff that worked yesterday breaks today. No one has time to write proper tests. In this kind of situation, you will often need to compare today’s output with a “known good state”. There are a dozen roughly understood config options for many things that might make sense if you were insane. Putting the generated code under version control in this situation is a must since it will make your life easier.

The next level is a code generator which is itself mature but offers a ton of config options. Hypothetically, you will know the correct ones to use before you run the generator for the first and only time. Stop laughing. In practice, your understanding of the config options will evolve. As you encounter bugs and solutions, you will need to know what else a config change breaks. Make your life easy and use version control: change the config, regenerate, look at the diff, try again.

In a similar fashion, learning to use the code generator in an efficient and useful way will take time. You will make mistakes and learn from them. That won’t stop a co-worker from making the same mistakes or other ones. Everyone in the team has to learn to use the tool. Version control will prevent one person from breaking things for the others.

How

Write a parameterized unit test which generates the code in a temporary folder. In the end, each file should get its own test which compares the freshly generated version with the one in the source tree.

Add one test at the end which checks that the list of files in both folders is the same (to catch newly generated files and files which have to be deleted).

Add a command line option which overwrites the source files with the ones created by the test. That way, you can both catch unexpected changes in your CI builds and efficiently update thousands of files when you want.

The logic in the test should be:

expected = content of the freshly generated file
actual   = content of the file in the source tree,
           or just the file name if the file doesn't exist (this makes
           it easier to find the file when the test itself is broken)

if expected != actual then
    if overwrite then copy expected to actual
    assert expected == actual
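The logic above can be sketched as a small Python helper. Everything here is a hedged example, not a definitive implementation: the folder layout and the `overwrite` flag (which would be driven by a command line option or environment variable in a real setup) are assumptions.

```python
import shutil
from pathlib import Path

def compare_generated(generated: Path, checked_in: Path,
                      overwrite: bool = False):
    """Compare a freshly generated tree against the checked-in tree.

    `generated` is the temporary folder the test generated into,
    `checked_in` is the folder under version control. With `overwrite`,
    the checked-in files are updated instead of failing the test.
    """
    gen_files = sorted(p.relative_to(generated)
                       for p in generated.rglob("*") if p.is_file())
    for rel in gen_files:
        expected = (generated / rel).read_text()
        target = checked_in / rel
        # Fall back to the file name so a broken test still points at
        # the right file.
        actual = target.read_text() if target.exists() else str(target)
        if expected != actual and overwrite:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(generated / rel, target)
            actual = expected
        assert expected == actual, f"{rel} differs; regenerate or overwrite"
    # The extra test at the end: both folders must contain the same
    # files, to catch checked-in files which should have been deleted.
    vc_files = sorted(p.relative_to(checked_in)
                      for p in checked_in.rglob("*") if p.is_file())
    assert gen_files == vc_files, "file lists differ"
```

In a real project, you would wrap this in one parameterized test per file rather than a single loop, so each file shows up as its own pass or fail in the test report.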

Use a version of the assert that shows a diff in your IDE. That way, you can open the file in your IDE and use copy&paste out of the diff window to apply small changes and get a feeling for how they work.

Or you can edit the sources until they look the way they should and then tweak config options until the tests confirm that the code generator now produces the exact desired result.

Bonus: You can tweak the generated code in your unit test. It’s as simple as applying patches in the “read content of the freshly generated file” step. One use for this is to fix all the IDE warnings in the generated code to get a clean workplace. But you can also patch bugs that the code generator maintainers don’t want to fix.
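Such a patch step can be as small as a string replacement. A minimal sketch, with an entirely hypothetical patch table and file name:

```python
# Hypothetical patch table: fix things the generator gets wrong before
# comparing against the checked-in sources.
PATCHES = {
    "Model.java": [
        # Add the type parameter the generator always omits.
        ("List items", "List<Item> items"),
    ],
}

def read_generated(name: str, raw: str) -> str:
    """Apply the patches for this file in the 'read content of the
    freshly generated file' step of the comparison test."""
    for old, new in PATCHES.get(name, []):
        raw = raw.replace(old, new)
    return raw
```

Since the patched version is what gets compared (and, with the overwrite option, checked in), the fixes survive every regeneration.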

Workaround

If you don’t want to put all generated code under version control, you can create a spike project to explore all the important features. In this spike, you create an example for every feature you need and put the output under version control. That way, you don’t have to put millions of lines under version control.

The drawback is that you need a team of disciplined individuals who stick to the plan. In most teams, this kind of discipline is shot in the back by daily business. If you find yourself in a mess after a few weeks: Put everything under version control. It’s a bit of wasted disk space, say $10 per month. If you have to discuss this with the team for more than five minutes, the discussion was already much more expensive.


Children can become anything they want

26. June, 2022

The difference is that some people think “they” means “the children” while other people think it means themselves.


Another Reason to Avoid Comments

16. April, 2022

There is this old saying that if you feel you have to write a comment to explain what your code does, you should improve the code instead.

In the talk above, I heard another one:

A common fallacy is to assume authors of incomprehensible code will somehow be able to express themselves lucidly and clearly in comments.
– Kevlin Henney


Dark Forest is a Fairy Tale

16. December, 2021

The Dark Forest, an idea developed by Liu Cixin for his Remembrance of Earth’s Past series (also known for its first book, “The Three-Body Problem”), is just a fairy tale: Interesting to think about, and there is a moral, but it’s not based on reality.

Proof: We are still here.

The Dark Forest fairy tale is a solution to the Fermi paradox: If there are billions of planets like Earth out there, where is everyone? The Dark Forest claims that every civilization that is foolish enough to expose itself gets wiped out.

Fact: We have been exposing ourselves for hundreds of millions of years. Our planet has been sending the signal “lots of biological life here” for about 500 million years to anyone who cares.

Assuming that the Milky Way is about 100’000 light years across, this means every civilization out there has known about Earth for at least 499.9 million years. If they were out to kill us, we would be long gone by now. Why wait until we can send rockets to space if they are so afraid of any competition?

How would they know about us? We can already detect planets in other star systems (the count at the time of writing is 4604, see http://www.openexoplanetcatalogue.com/). In a few years, we’ll be able to identify all the planets close to us which can carry life, for values of “close” around 100 light years. A decade later, I expect that to work for any star system within 1’000 light years of us. In 100 years, I expect scientists to come up with a trick to scan every star in our galaxy. An easy (if slow) way would be to send probes up and down out of the galactic disk to get a better overview. Conclusion: We already know a way to see every star system in the galaxy. It’s only going to get better.

Some people worry that the technical signals we send could trigger an attack, but those signals get lost in the background noise fairly quickly (after much less than 100 light years). This is not the case for the most prominent signal: the amount of oxygen in Earth’s atmosphere. If you’re close to the plane of the ecliptic (i.e. when you look at the sun, Earth will pass between you and the sun), you can see the oxygen line in the star’s spectrum from thousands of light years away. Everyone else has to wait until Earth moves in front of some background object.

There is no useful way to hide this signal. We could burn the oxygen, making Earth inhospitable. Or we could cover the planet with a layer of rock; also not great unless you can live on a diet of rock and salt water.

For an economic argument: When Rwanda invaded the Democratic Republic of the Congo to get control of coltan mining, it made roughly $240 million per year from selling the ore. China makes that much money by selling smartphones and electronics to other countries every day (source: Homo Deus by Yuval Harari). My take: Killing other civilizations is a form of economic suicide.

Conclusion: The Dark Forest is an interesting thought experiment. As a solution for the Fermi paradox, I find it implausible.


What are Software Developers Doing All Day?

6. November, 2021

Translate.

Mathematics? Nope. I use trigonometric functions like sin(x) to draw nice graphics in my spare time but I never use them at work. I used a logarithm last year to round a number but that’s about it. Add, multiply, subtract and divide are all the math that I ever do, and most of that is “x = x + 1”. If I have to do statistics, I use a library. No need to find the maximum of a list of values myself.

So what do we do? Really?

We translate mumble into Programming Language of the Year(TM).

Or, more diplomatically: We try to translate the raw and unpolished rambling of clients into the strict and unforgiving rules of a programming language.

We’re translators, like the people who translate between human languages. We know all the little tricks of how to express ourselves, and what you can and can’t easily express. After a while, we can distinguish between badly written code and the other kind, just like an experienced journalist.