Code Formatting by Example

15. December, 2017

People are starting to use Machine Learning to write better code.

One example is code formatting. Instead of painstakingly writing a parser and then rules to apply to each node of the Abstract Syntax Tree (AST), the team behind Codebuff uses a “learn by example” approach: feed Codebuff a couple of formatted examples and it will learn to apply the implied rules to new source code.

If you find something that looks bad, you don’t have to dig through dozens of pages of code formatter options or code formatter config files, just add more examples.
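To make the approach concrete, here is a toy sketch of learning formatting from examples. This is my own illustration, not Codebuff’s actual algorithm (which, as I understand it, trains a k-nearest-neighbor model on many more token features): record the whitespace most commonly seen before each pair of token categories in the training samples, then replay those decisions on unformatted code.

```python
import re
from collections import Counter, defaultdict

def kind(tok):
    """Abstract a token into a coarse category so the model generalizes."""
    if tok[0].isalpha() or tok[0] == "_":
        return "ID"
    if tok[0].isdigit():
        return "NUM"
    return tok  # punctuation stands for itself

def learn(samples):
    """Map (kind of previous token, kind of token) to the whitespace
    most commonly seen before that token in the formatted samples."""
    seen = defaultdict(Counter)
    for src in samples:
        prev, pending = "", ""
        for tok in re.findall(r"\s+|\w+|[^\w\s]", src):
            if tok.isspace():
                pending = tok
            else:
                seen[(prev, kind(tok))][pending] += 1
                prev, pending = kind(tok), ""
    return {ctx: ws.most_common(1)[0][0] for ctx, ws in seen.items()}

def reformat(src, model):
    """Re-emit src, inserting the learned whitespace before each token."""
    out, prev = [], ""
    for tok in re.findall(r"\w+|[^\w\s]", src):
        # fall back to a single space for contexts never seen in training
        out.append(model.get((prev, kind(tok)), " " if out else ""))
        out.append(tok)
        prev = kind(tok)
    return "".join(out)

model = learn(["a = b + 1;\nc = d + 2;\n"])
print(reformat("x=y+3;", model))  # prints: x = y + 3;
```

If the output looks wrong somewhere, you add a training sample that covers that case and relearn: the “just add more examples” workflow, in miniature.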

If you’re using Xtext, you can use the tool to write code formatters for your DSL: “Machine Learning Formatting with Xtext”


Climate Change Visible

7. December, 2017

In the early days of my blog, almost 10 years ago, I posted “Why You Should Bother About 2°“. In the meantime, I’ve found another graphic that helps to understand that something is more broken than usual. Leisurely scroll down for a nice reminder of our history and how long things take to change.

xkcd: Earth Temperature Timeline


Why Apple Sucks

10. November, 2017

People often ask me why I hate Apple products. My answer: “I’m not compatible.”

I was always looking for a good example of what I mean. In my hands, Apple products crash, behave weirdly, or break in important ways.

Case in point: Apple has released an “everyone can write an App” course on iTunes. Here is what I did:

  1. I clicked the link
  2. itunes.apple.com opened in my web browser
  3. For a few seconds, nothing happened
  4. Then I got a message box: “Do you want to open this link in iTunes?”
  5. Uhh… yes?
  6. iTunes opened. Showing my music library. Where is the product that you were supposed to open? Thanks for wasting 10 seconds of my life.
  7. Clicked the link again.
  8. Clicked “yes” again
  9. Now iTunes shows a page with “Swift Playgrounds”. Probably what I wanted to see, I’m not sure anymore.
  10. I click on the product.
  11. iTunes opens a web page in my browser. WTF???? What’s the point of having iTunes when it can’t even download something!?
  12. The web page says “Please install iTunes.”
  13. I give up.

That’s just one example of how Apple products waste my time. It’s almost always like that.

Apple, I hate you.


Artificial Ethics

24. October, 2017

While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?

Around 26:00, Isaac talks about ways to develop AIs to control androids. He would like to use the safe approach of manually programming all the specific details to create an AI.

The “manual programming” path has been tried since the 1960s and is now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.

Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).

Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real life incidents and see how it rules. Note: The output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions on how to solve an ethical dilemma. It could just judge an existing solution.

It’s important that the output (judgement) is checked and the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).

Based on that, another machine learning algorithm (MLA) could then try many different solutions, present those to the ethics one, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
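The judge/proposer split sketched above can be put into toy code. Everything here is made up for illustration (the feature names, the weights, the scoring rule); a real judge would be a trained model, not a hand-weighted sum:

```python
def ethics_score(solution, weights):
    """Toy judge: rate a candidate solution between 0 (very unethical)
    and 1 (very ethical). A real judge would be a trained model; this
    stand-in sums hand-picked feature weights and clamps the result."""
    raw = sum(weights.get(feature, 0.0) for feature in solution)
    return max(0.0, min(1.0, raw))

def pick_best(candidates, weights):
    """Stand-in for the second MLA: try many solutions, let the judge
    score each one, and keep the best."""
    return max(candidates, key=lambda c: ethics_score(c, weights))

def human_feedback(weights, feature, delta):
    """Supervision step: nudge the judge when its ruling was off."""
    weights[feature] = weights.get(feature, 0.0) + delta

# Hypothetical feature labels attached to candidate solutions
weights = {"transparent": 0.4, "consensual": 0.4, "harms_minority": -0.8}
candidates = [
    {"transparent", "harms_minority"},  # judged 0.0 after clamping
    {"transparent", "consensual"},      # judged 0.8
]
best = pick_best(candidates, weights)
print(sorted(best))  # prints: ['consensual', 'transparent']
```

The `human_feedback` step is where the supervision loop from above would live: whenever humans disagree with a ruling, the judge’s parameters get adjusted.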

That would eventually lead to ethical machines. Which would cause new problems: There will eventually be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.


Your Phone Should be Your Butler

18. October, 2017

A lot of people share private details with the world without being aware of it. For example, they take nude pictures with their phones (the NSA keeps a copy, just in case), or they sell the phone without wiping it properly, allowing the next owner to get a good idea of who they are, or they install apps like Facebook’s which ask “can I do whatever I want with anything I find on your phone?” and people happily click the “Yeah, whatever” button (a.k.a. “Accept”).

When people use modern technology, they have a mental model. That model tells them what to expect when they do something (“press here and the screen will turn on”). It also contains other expectations that are rooted in social behavior. Like “I take good care of my phone and it will take good care of me (and my data)”.

That, when you think about it, is nonsense.

A phone is not a butler. In essence, a phone is a personal data collecting device with additional communication capabilities. But its main goal is to learn about you and then manipulate you into buying stuff. It’s about money. Companies want it, you have it, they want you to give it to them. Anything else only exists to facilitate this process. If pain increased revenue, we’d be living in hell.

Case in point: speech-based input. When you click on a page, that doesn’t tell much about you. When you use your finger, the phone can at least feel when you’re trembling. Are you angry or enthusiastic? We’re getting there. But your voice is rich with detail about your emotional state. More data to milk for perfectly tailored offers which you simply won’t want to refuse.

A butler, on the other hand, has your interests in mind. They keep private information private instead of selling it to the highest bidder. They look out for you.

The source of the difference? You pay a butler. (S)he is literally working for you. On the phone, a lot of people expect the same service to happen magically and for free. Wrong planet, pals.

Wouldn’t it be great if phones were like butlers? Trustworthy, discreet and helpful instead of just trying to be helpful?

I hope we’ll see more technology like the app Nude (which hides sensitive photos on your phone).


Spreading Bad Software is Immoral

29. September, 2017

From Fefe’s Internet Security Days keynote:

Schlechte Software zu verbreiten ist unmoralisch.

Translation: Spreading sloppy software is immoral. It’s like producing waste and dumping it into a river. Proper handling would be expensive; illegal dumping saves money and turns the cost into a SEP (somebody else’s problem).

Writing sloppy software is similar. Instead of investing time into doing it right, you try to externalize costs: The client will somehow (have to) deal with it. They either have to pay you to make it better the second time or they have to spend time and nerves every day to work around shortcomings.

When we see someone dump toxic waste in a forest, most people are outraged. The same people, when they are managers of a software company, sign contracts that fix the delivery date of something before the requirements are known. Software developers, desperately trying to feel and look competent, shout “Done!” only to collapse into a morose heap of self-pity after a minimum of poking at what this “done” really means.

Fefe argues that doing it right is as expensive as doing it sloppily. I have the same hunch. I’ve seen numbers in the Standish Group Chaos Report (alt: Wikipedia, German) which give a good indication of how much failing projects cost: around 20% are a total waste of money since they are eventually killed, 52% cost twice as much as planned, and only 30% make it on time, on budget and with the promised feature set (note: I bet at least half of those 30% made it because the feature set was reduced/readjusted during the project).

If you assume that in 2014, $250 billion was spent on software development in the US, that means $50 billion lost on failed projects alone. That is our money. Yours and mine. Companies don’t magically get money; they sell products, and each wasted project eventually means additional figures on some price tag in a shop.

Then we have $125 billion which should have been $62 billion, but another $62 billion was necessary to make it to the finishing line. It’s harder to tell how much of that is wasted. You can’t count projects that were simply underestimated, or feature creep – additional features cost additional money, so it’s over budget but not wasted. Let’s assume $10 billion (less than 10% waste overall) in this group.

In a perfect world, that would mean we could spend 24% ($60 billion out of $250 billion) more on software quality without any additional cost.
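As a sanity check, here is the arithmetic above in a few lines. The $250 billion total and the 20%/52%/30% split are the assumptions already stated; the $10 billion is the rough estimate from the previous paragraph:

```python
total = 250e9                  # assumed 2014 US software development spend

failed_waste = 0.20 * total    # killed projects are a total loss: ~$50B
over_budget = 0.52 * total     # ~$130B on projects that doubled in cost
                               # (rounded to $125B in the text)
overrun = over_budget / 2      # ~$65B of unplanned overrun
overrun_waste = 10e9           # rough estimate of waste in that group

total_waste = failed_waste + overrun_waste
share = total_waste / total
print(f"${total_waste / 1e9:.0f} billion wasted, {share:.0%} of total spend")
# prints: $60 billion wasted, 24% of total spend
```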


Wrong colors in Windows Photo Viewer / Falsche Farben in Windows-Fotoanzeige

18. September, 2017

When colors in the Windows Photo Viewer (Windows 8 to 10) look oddly wrong, it’s probably because of something called a “color profile”.

Solution:

  1. Open the “Control Panel” (Systemsteuerung)
  2. Open “Color Management” (Farbverwaltung)
  3. Select your monitor
  4. Check “Use my settings for this device” (“Eigene Einstellungen für das Gerät verwenden”) (otherwise, you can’t click anything at the bottom)
  5. Click “Add” (“Hinzufügen”)
  6. Select “sRGB IEC61966-2.1”
  7. Click “Set as Default Profile” (“Als Standardprofil festlegen”)
  8. Repeat for each monitor
  9. Restart the Photo Viewer (Fotoanzeige)