An Interstellar Transport System

3. August, 2009

Mankind has been dreaming about an interstellar transport system for a long time (some of those dreams were nightmares). There are a lot of ideas around which use exotic matter, wormholes, etc. A few years ago, a colleague proposed something much simpler. It works like this:

Take a body in our own solar system and drill a hole to the center. Earth is a bit too hot but the Moon would work. Mars would be better but that’s too far away for a first prototype. In the center, build a chamber which can be isolated completely from the outside. That would need a lot of technology but it should be possible to create a room where an object would not be influenced by any external forces or any information exchange (photons).

Next, you create a similar chamber in a second place. This can be anywhere in the universe. Distance doesn’t matter. Only that you have two places that you can “seal” from the rest of the universe.

Now the trick: You put an object in chamber 1 and measure whether it is still there. You do the same in chamber 2. There are quantum measurement methods whose outcome probabilities can be biased slightly one way or the other. In chamber 1, you use a method with a slightly higher probability of returning “nothing here”; in chamber 2, you use a method with a slight lean towards “object is here”.

In normal physics, this is ridiculous but not at a quantum level. At a quantum level, as soon as you completely isolate an object from the outside and make sure that no information whatsoever can be exchanged with the surrounding chamber, it doesn’t matter anymore in which chamber the object really is. You have created a really huge qubit, a system where nobody can say anymore at which place the object currently is.

The moment no one can tell, the universe stops caring, and by using the clever measurement technique, you constantly nudge the object to “jump” to chamber 2. After a long but finite number of measurements, it will be gone from chamber 1. And if no alien race came up with the same idea, it must now be in chamber 2 because that’s the only other place in the universe where it could be.
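The nudging can be sketched as a toy calculation (my own illustration, not real quantum mechanics): treat the object’s probability of still being found in chamber 1 as a number that gets renormalized after every slightly biased measurement. The bias values and the threshold are made up for the example.

```java
// Toy model of the biased-measurement trick. Not actual quantum mechanics:
// just a probability that shrinks a little with every measurement round.
public class ChamberNudge {

    // bias1 < bias2: chamber 1's method leans towards "nothing here",
    // chamber 2's method leans towards "object is here".
    static int roundsUntilEmpty(double bias1, double bias2, double threshold) {
        double p1 = 0.5;                     // fully isolated: both chambers equally likely
        int rounds = 0;
        while (p1 > threshold) {
            double w1 = p1 * bias1;          // weight of "still in chamber 1"
            double w2 = (1.0 - p1) * bias2;  // weight of "already in chamber 2"
            p1 = w1 / (w1 + w2);             // renormalize after the measurement
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println("rounds until chamber 1 is effectively empty: "
                + roundsUntilEmpty(0.49, 0.51, 1e-9));
    }
}
```

Each round multiplies the odds of “still in chamber 1” by bias1/bias2, so the probability decays geometrically: with these toy numbers, a few hundred measurements suffice — long, but finite, as the post says.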

If you wonder why we had to drill to the center of a moon or planet: That’s the only place in our solar system where no net gravitational force pulls at an object (gravity would also count as “information exchange”). You could try a Lagrangian point, but there you’d have a hard time shielding the object from all the radiation in space — the mass of the Moon does that for you. Next, you’ll need to cool the chamber to absolute zero to prevent the exchange of thermal photons, and you’ll need a large blob of water to shield the chamber against neutrinos.

As far as I can tell, the only hole in the theory is microgravity: Will it be possible to shield the object from the tiny gravitational forces that still exist even at the core of a big object in free fall? If the answer is yes, then we might have an interstellar transporter which is even pretty efficient. Transport will take some time (until the measurements have driven the probabilities far enough) but the change of location itself will be instantaneous.

Quantum physics is cool 🙂


Code Comments Stink

27. July, 2009

If you ever need a good argument why code should be readable without comments, look here.

Dead languages are dead because they don’t fit today anymore. Let them rest.


Installing openSUSE 11.1 on an Acer Aspire 5737Z

25. July, 2009

Yesterday, I bought an Acer Aspire 5737Z for my mother. I ran into two issues while trying to install openSUSE 11.1 on it:

  1. System error -1012 during partitioning
  2. Installation of the boot loader failed with error 12: Invalid device requested.

In both cases, the openSUSE installer failed to enumerate the hard disk partitions correctly. The partition layout was as follows:

  • /dev/sda1 – Unknown partition (probably the recovery program)
  • /dev/sda2 – Windows C (20GB)
  • /dev/sda3 – Extended partition for linux
  • /dev/sda5 – swap partition (2GB)
  • /dev/sda6 – root partition (/ 20GB)
  • /dev/sda7 – home partition (/home rest)

The first error happens when the installer tries to set the type of /dev/sda6 to 82 (swap). That should have been /dev/sda5. The solution is to boot the rescue system and partition the disk manually. I suggest using “cfdisk /dev/sda” for this. Make sure you mark the root partition as bootable.

After that has been done, tell the installer to accept the existing partitioning. You’ll still have to assign the mount points, though, and tell the installer to format the partitions.

Later, grub gets confused in a similar manner. It tries to add the Windows boot manager from (hd0,2) (which maps to sda3; grub starts counting at 0!). That should be (hd0,1). Since everything is already installed, we just need to boot the rescue system and chroot into the installed system:

  1. mount /dev/sda6 /mnt – Mount the root filesystem
  2. mount --bind /dev /mnt/dev – Map (bind) the devices into the root filesystem (so that you can access the hard disk, etc.)
  3. chroot /mnt /bin/bash – Start a shell that behaves as if you had booted from the installed system

You can tell that you’re in a new shell by pressing “Up”. That should recall your last command (chroot). Your first task is to fix the broken grub config. Edit /etc/grub.conf. The first line should read setup --force-lba (hd0). Run grub-install. If it still fails, try to run it manually:

grub
root (hd0,5)
setup --force-lba (hd0)

Note that this will overwrite the Windows boot code. I’m not sure how to boot Windows now, but frankly, I don’t care.

The next step on the path to hell is the NVIDIA driver. I didn’t have much luck with the precompiled one from the NVIDIA repository. Instead, I installed kernel-source and gcc. After that, you can do cd /usr/src/linux ; make oldconfig ; make and abort the build when it starts to build stuff in arch/x86/. Now you can compile the driver from source: just run sh ./NVIDIA-Linux-x86_64-185.18.14-pkg2.run, answer all the questions, and then run sax2.

In sax2, make sure to select an “LCD monitor” with “1360×768” pixel resolution. After a moment, you should have a clean display.


Infinity

23. July, 2009

Every now and then, I stumble over something awesome. Infinity is an MMO a bit like EVE Online, but it attempts to avoid most of that game’s mistakes. If the game is as great as the images, we’re in for a real treat.


You Have Been There

22. July, 2009

The first step in an attack is to gather information. You’re probably browsing with Firefox, have all the usual plugins installed (AdBlock Plus, NoScript), you’ve disabled cookies and you think you’re safe.

Security doesn’t work like that. Let me give you an example. You may already know that servers save little bits of information on your computer to recognize you when you return. Cookies.

But there is another way to know where you’ve been. Can you guess it? No? Look at the links. Still nothing? The color? It changes after visiting a site?

So the solution is to use a piece of JavaScript (and almost every site on the ‘net needs JS these days) and examine the color of your links. Gotcha.

Next time, disable your browser history, too. And the cache. And the proxy. And JavaScript. Better yet, don’t start the browser at all.


Stopping Spam Crawlers

17. July, 2009

The war against spam is mostly lost. People don’t care about the security of their PCs (if they even know what that means). Bot nets are here to stay. But the bots need crawlers that harvest mail addresses, and scientists at Indiana University have found that these come from a relatively small number of IP addresses. Blocking these would effectively cut the spammers off – from getting new addresses.

Until they train their bot nets to crawl.

Link: Blick in die Spammer-Trickkiste (“A look into the spammers’ box of tricks”, in German)


Taking Security Seriously

16. July, 2009

The security of today’s operating systems is slowly getting better, meaning that it becomes harder and harder for a fraudster to get your credit card number by asking your computer. Asking the person in front of the computer still works. But I digress.

On The Daily WTF, there is a report on how the military handled the problem.

The idea of actually carrying the vulnerable parts of the computer away whenever someone untrustworthy comes close may sound odd, but the solution is really what the military is all about: Make it work, no matter what might go wrong. And be creative about what could go wrong, but take the simplest solution (which is the main difference from geeks: we almost never pick the simplest solution).

Which also explains why they clipped parts out of the printouts: Just blackening them can be undone (holding the paper against the light might be enough), but data that isn’t there can’t be abused.

A pity that this simple idea is shunned today. Instead of collecting as little data as possible for a job, as much data as possible is hoarded.


Traits for Groovy/Java

25. June, 2009

I’m again toying with the idea of traits for Java (or rather Groovy). Just to give you a rough idea if you haven’t heard about this before, think of my Sensei application template:

class Knowledge {
    Set<Tag> tags;
    Knowledge parent;
    List<Knowledge> children;
    String name;
    String content;
}
class Tag { String name; }
class Relation { String name; Knowledge from, to; }

A very simple model, but it contains everything you can encounter in an application: a parent-child/tree structure, 1:N and N:M mappings. Now the idea is to have a way to build a UI and a DB mapping from this code. The idea of traits is to implement real properties in Java.

So instead of fields with primitive types, you have real objects to work with:

    assert "name" == Knowledge.name.getName()

These objects exist partially at the class and at the instance level. There is static information at the class level (the name of the property) and there is instance information (the current value). But it should be possible to add more information at both levels. So a DB mapper can add necessary translation information to the class level and a Hibernate mapper can build on top of that.
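This split between class-level and instance-level information can be sketched like this (a minimal sketch of my own; the names Property and PropertyValue are invented for illustration, not from any existing library):

```java
import java.util.HashMap;
import java.util.Map;

// Class-level part: shared by all instances. Mappers can attach extra
// information (e.g. a DB column type) to the meta map.
class Property {
    final String name;
    final Map<String, Object> meta = new HashMap<>();

    Property(String name) { this.name = name; }
    String getName() { return name; }
}

// Instance-level part: the current value, plus a link back to the shared
// class-level metadata.
class PropertyValue {
    final Property property;
    Object value;

    PropertyValue(Property property) { this.property = property; }
}

class Knowledge {
    // one shared Property object per field of the model
    static final Property NAME = new Property("name");

    // each Knowledge instance carries its own value for that property
    final PropertyValue name = new PropertyValue(NAME);
}
```

With this layout, a DB mapper could put translation information into Knowledge.NAME.meta once, and a Hibernate mapper could build on top of the same entry, while each instance only stores its current value.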

Oh, I hear you cry “annotations!” But annotations can suck, too. You can’t have smart defaults with annotations. For example, you can’t say “I want all fields called ‘timestamp’ to be mapped to a java.sql.Timestamp”. You have to add the annotation to each timestamp field. That violates DRY. It quickly gets really bad when you have to do this for several mappers: database, Hibernate, JPA, the UI, Swing, SWT, GWT. Suddenly, each property would need 10+ annotations!
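A smart default, by contrast, is one rule that applies everywhere. Here is a hedged sketch of what I mean (the TypeRules class and its method are invented for illustration):

```java
// One global rule instead of one annotation per field: every property called
// "timestamp" is mapped to java.sql.Timestamp, no matter which class it is in.
class TypeRules {
    static Class<?> sqlTypeFor(String fieldName, Class<?> declaredType) {
        if (fieldName.equals("timestamp")) {
            return java.sql.Timestamp.class;  // the single, model-wide default
        }
        return declaredType;                  // everything else keeps its declared type
    }
}

// A model class that needs no annotations at all.
class AuditRecord {
    java.util.Date timestamp;
    String user;
}
```

The DRY win: adding a second class with a timestamp field costs nothing, and a Hibernate, JPA, or UI mapper could all consult the same rule instead of each demanding its own annotation.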

I think I’ve found a solution which should need relatively few lines of code with Groovy. I’ll let that stew for a couple of days in my subconscious and post another article when it’s well done 🙂


Jazoon, Day 2: XWiki

24. June, 2009

I’m a huge fan of wikis. Not necessarily MediaWiki (holy ugly, Batman, the syntax!). I dig MoinMoin. I just heard the talk by Vincent Massol about next generation wikis. Okay, I can hear you moan under the load of buzzwords but give me a moment. XWiki looks really promising.

Wikis basically allow you to publish mostly unstructured data. I say “mostly” because wikis give it some structure, but not (too) much: You can organize it in pages and put it into context (by linking between the pages). Often, this is enough. But recently, MediaWiki has started to add support for structured data. See that infobox in the top right corner of many articles? But that’s just half-hearted.

XWiki takes this one step further. XWiki, as I understand it, is a framework of loosely coupled components which allow you to create a wiki. The default one is pretty good, too, so most of the time, you won’t even get into this. The cool part about XWiki is that you can define a class inside of it. Let me repeat: You can create a page (like a normal text page) that XWiki will treat as a class definition. So this class gets versioned, etc. You can then add attributes as you like.

After that, you can create instances of this class. The instances are again wiki pages. You can even use more than a single instance on a page, for example, you can have several tag instances and a single person instance. Instances are versioned, too. Of course they are, this is a wiki!

Now you need to display that data. You can use Velocity or Groovy for that. And guess what, the view is … a wiki page. So your designers can create a beautiful look for the boring raw data. With versions and comments and everything. While some other guys are adding data to the system.

In “normal” wiki pages, you can reference these instances and render them using such a template. The same is true for editors. With a few lines of code, you can create overview pages: All instances of a class or all instances with or without a certain property or you can use Groovy to do whatever you can think of.

Now imagine this: You have an existing database where your marketing guys can, say, plan the next campaign. They can use all the wiki features to collect ideas, filter and verify them, to come up with a really good plan. Some of that data needs to go into a corporate database. In former wikis, you’d have to use an external application, switch back and forth, curse a lot when they get out of sync.

With XWiki, you can finally annotate data in your corporate database with a wiki page, with all the power of a wiki and you can even display the data set in the wiki and edit it there. Granted, the data set won’t be versioned unless your corporate database allows that but it’s simple to do the versioning in the data access layer (for example, you can save all modifications in a log database).

Suddenly, possibilities open up.


Precompiling Custom JARs

24. June, 2009

Java 5 precompiles the rt.jar to a file with the JIT when you start it the first time. This is mostly to improve startup times; it takes only a few moments and is much more efficient than running the JIT when a class has been used more than N times. Next time the class is requested, the VM skips the bytecode loading and directly pulls in the precompiled binary which is already optimized for your CPU.

My idea is to open this process up to custom JARs. Any big Java app loads heaps of external JARs, and that takes time – often a lot of time. JARs would need to supply a special META-INF file which contains a UUID or a checksum from which the VM can tell whether the JAR has changed or not.

The first time the JAR shows up in the classpath, the JIT precompiler would convert it and save the result in the cache. Next time, the META-INF file would be read, and the bytecode would be ignored. I’ll open an enhancement request for OpenJDK 7. If you like the idea, please support it.