Why Apple Sucks

10. November, 2017

People often ask me why I hate Apple products. My answer: “I’m not compatible.”

I’ve always been looking for a good example of what I mean. In my hands, Apple products crash, behave weirdly, or reveal that important features are utterly broken.

Case in point: Apple has released an “everyone can write an App” course on iTunes. Here is what I did:

  1. I clicked the link
  2. itunes.apple.com opened in my web browser
  3. For a few seconds, nothing happened
  4. Then I got a message box: “Do you want to open this link in iTunes?”
  5. Uhh… yes?
  6. iTunes opened. Showing my music library. Where is the product that you were supposed to open? Thanks for wasting 10 seconds of my life.
  7. Clicked the link again.
  8. Clicked “yes” again
  9. Now iTunes shows a page with “Swift Playgrounds”. Probably what I wanted to see, I’m not sure anymore.
  10. I click on the product.
  11. iTunes opens a web page in my browser. WTF???? What’s the point of having iTunes when it can’t even download something!?
  12. The web page says “Please install iTunes.”
  13. I give up.

That’s one example of how Apple products waste my time. It’s almost always like that.

Apple, I hate you.


Artificial Ethics

24. October, 2017

While watching this video, I wondered: we’re using machine learning to earn money on the stock market and to make computers understand speech. Why not use it for ethics?

Around 26:00, Isaac talks about ways to develop AIs that control androids. He would prefer the safe approach: manually programming all the specific details to create an AI.

The “manual programming” path has been tried since the 1960s, and it’s now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.

Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).

Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) on which a machine learning algorithm could be trained. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: the output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions for how to solve an ethical dilemma; it could only judge an existing solution.
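
Such a judge could start out as a plain text classifier. Here is a minimal sketch in Python, assuming scikit-learn is available; the four training cases and their labels are invented, and a real system would of course train on the large corpora mentioned above:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training data: descriptions of actions, labelled by humans.
    cases = [
        "return the lost wallet to its owner",
        "share the medicine with the sick neighbour",
        "steal the donations collected for charity",
        "falsify the safety report to save money",
    ]
    labels = [1, 1, 0, 0]  # 1 = judged ethical, 0 = judged unethical

    judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
    judge.fit(cases, labels)

    # The judge only scores a proposed solution; it never generates one.
    solution = "hide the defect and ship the product anyway"
    score = judge.predict_proba([solution])[0][1]  # 0..1, probability of "ethical"
    print(f"ethics score: {score:.2f}")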

It’s important that the output (judgement) is checked and that the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).

Based on that, another machine learning algorithm (MLA) could then try many different solutions, present them to the ethics algorithm, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
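
A sketch of that loop, reusing the judge from above; generate_candidates() is a made-up stand-in for whatever generative model produces the proposals:

    # Hypothetical generate-and-score loop around the ethics judge.
    def generate_candidates(problem: str) -> list[str]:
        # Placeholder: a real system would use a trained generative model.
        return [
            "ignore the problem",
            "fix the defect and inform the customers",
            "blame a supplier",
        ]

    def best_solution(problem: str, judge) -> str:
        candidates = generate_candidates(problem)
        # Rank candidates by ethics score; humans review the winner and
        # feed their verdict back into both algorithms.
        return max(candidates, key=lambda c: judge.predict_proba([c])[0][1])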

That would eventually lead to ethical machines. Which would cause new problems: There will eventually be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.


Your Phone Should be Your Butler

18. October, 2017

A lot of people share private details with the world without being aware of it. For example, they take nude pictures with their phones (the NSA keeps a copy, just in case). They sell the phone without wiping it properly, allowing the next owner to get a good idea of who they are. Or they install apps like the one from Facebook, which asks “Can I do whatever I want with anything I find on your phone?”, and they happily click the “Yeah, whatever” button (a.k.a. “Accept”).

When people use modern technology, they have a mental model. That model tells them what to expect when they do something (“press here and the screen will turn on”). It also contains other expectations that are rooted in social behavior. Like “I take good care of my phone and it will take good care of me (and my data)”.

That, when you think about it, is nonsense.

A phone is not a butler. In essence, a phone is a personal data collecting device with additional communication capabilities. Its main goal is to learn about you and then manipulate you into buying stuff. It’s about money. Companies want it, you have it, they want you to give it to them. Everything else only exists to facilitate this process. If pain increased revenue, we’d be living in hell.

Case in point: speech-based input. When you click on a page, that doesn’t tell anyone much about you. When you use your finger, the phone can at least feel when you’re trembling. Are you angry or enthusiastic? We’re getting there. But your voice is rich with detail about your emotional state. More data to milk, to make you perfect offers you simply won’t want to refuse.

A butler, on the other hand, has your interests in mind. They keep private information private instead of selling it to the highest bidder. They look out for you.

The source of the difference? You pay a butler. (S)he is literally working for you. From their phones, a lot of people expect the same service to happen magically and for free. Wrong planet, pals.

Wouldn’t it be great if phones were like butlers? Trustworthy, discreet and helpful, instead of just pretending to be helpful?

I hope we’ll see more technology like the app Nude (which hides sensitive photos on your phone).


Spreading Bad Software is Immoral

29. September, 2017

From Fefe’s Internet Security Days keynote:

Schlechte Software zu verbreiten ist unmoralisch.

Translation: Spreading sloppy software is immoral. It’s like producing toxic waste and dumping it into a river: proper handling would be expensive; illegal dumping saves money and turns the waste into a SEP (Somebody Else’s Problem).

Writing sloppy software is similar. Instead of investing time into doing it right, you try to externalize costs: the client will somehow (have to) deal with it. They either have to pay you to make it better the second time, or they have to spend time and nerves every day working around the shortcomings.

When we see someone dump toxic waste in a forest, most people are outraged. The same people, when they are managers of a software company, sign contracts that define the delivery date of something before the requirements are known. Software developers, desperately trying to feel and look competent, shout “Done!”, only to collapse into a morose heap of self-pity after a minimum of poking at what this “done” really means.

Fefe argues that doing it right is no more expensive than doing it sloppily. I have the same hunch. I’ve seen numbers from the Standish Group Chaos Report (alt: Wikipedia, German) which give a good indication of how much failing projects cost: around 20% are a total waste of money since they are eventually killed, 52% cost twice as much as planned, and only 30% finish on time, within budget and with the promised feature set (note: I bet at least half of those 30% only made it because the feature set was reduced or readjusted during the project).

If you assume that in 2014, $250 billion was spent on software development in the US, that means $50 billion was sunk into failed projects alone. That is our money. Yours and mine. Companies don’t magically get money; they sell products, and each wasted project eventually means additional figures on some price tag in a shop.

Then we have $130 billion spent on projects which should have cost $65 billion; the other $65 billion was necessary to make it to the finishing line. It’s harder to tell how much of that is wasted. You can’t count projects that were simply underestimated, and feature creep isn’t waste either: additional features cost additional money, so the project goes over budget, but the money isn’t wasted. Let’s assume $10 billion (less than 10% waste) in this group.

In a perfect world, that would mean we could spend 24% ($60 billion out of $250 billion) more on software quality without any additional cost.
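
Here is the back-of-the-envelope arithmetic as a small Python sketch; the $250 billion total and the $10 billion of overrun waste are the assumptions from above:

    # Chaos Report percentages applied to the assumed 2014 US spend.
    total = 250e9                # assumed total spent on software development
    failed = 0.20 * total        # killed projects: a complete write-off
    challenged = 0.52 * total    # projects that cost twice their budget
    overrun = challenged / 2     # the half beyond the original budget
    overrun_waste = 10e9         # assumed: most overruns bought real features

    waste = failed + overrun_waste
    print(f"waste: ${waste / 1e9:.0f} billion = {waste / total:.0%} of total spend")
    # -> waste: $60 billion = 24% of total spend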


Wrong colors in Windows Photo Viewer / Falsche Farben in Windows-Fotoanzeige

18. September, 2017

When colors in the Windows Photo Viewer (Win 8 to 10) look oddly wrong, it’s probably because of something called a “color profile”.

Solution:

  1. Open the “Control Panel” (“Systemsteuerung”)
  2. Open “Color Management” (“Farbverwaltung”)
  3. Select your monitor
  4. Check “Use my settings for this device” (“Eigene Einstellungen für das Gerät verwenden”) (otherwise, you can’t click anything at the bottom)
  5. Click “Add” (“Hinzufügen”)
  6. Select “sRGB IEC61966-2.1”
  7. Click “Set as Default Profile” (“Als Standardprofil festlegen”)
  8. Repeat for each monitor
  9. Restart Photo Viewer (Fotoanzeige)

Thunderbird: Using Images in Signature

24. August, 2017

Lifewire has an excellent article on how to put an image into mail signatures: “Automatically Use a Picture in a Thunderbird Signature”.

Keep in mind that those images add a lot of bloat to your mail. If you just send a few sentences, even a small image can easily make the mail hundreds or thousands of times bigger: a two-line mail is a few hundred bytes, while a 500 KB photo (plus about a third on top for base64 encoding) weighs in at over a thousand times that.


Good and Bad Tests

16. January, 2017

How do you distinguish good from bad tests in your code?

Check these criteria. Good tests

  • Nail down expectations
  • Monitor assumptions
  • Help to locate the cause of a failure
  • Document usage patterns
  • Let you change code safely
  • Let you verify changes
  • Are short (in lines of code and in runtime)

Bad tests

  • Waste development time
  • Execute many, many lines of code
  • Prevent code changes
  • Need more time to write than the code they test
  • Need a lot of code to set up
  • Take ages to execute
  • Are hard to run

Expectations

There are a lot of checks in your compiler. Those help to catch the mistakes you make. Do the same with your tests. There are a lot of things that compilers don’t check: file encodings, the existence of files, the existence of config options, the types of config options.

Use tests to nail down your expectations. Read config files and validate the odd option.

Create a test which collects the whole configuration of your program and checks it against a known state. Check that each config option is set only once (or at least that it has the same value in all places).
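
A minimal sketch of such tests in Python, using pytest conventions and the standard configparser module; the file name app.ini, the sections and the options are invented:

    import configparser

    def load_config():
        # The parser's strict mode (the default) already rejects options
        # that are set twice within the same file.
        parser = configparser.ConfigParser()
        assert parser.read("app.ini"), "config file app.ini is missing"
        return parser

    def test_expected_options_exist():
        config = load_config()
        # Nail down the options the code relies on, including their types.
        assert config.has_option("server", "port")
        assert config.getint("server", "port") > 0

    def test_whole_config_matches_known_state():
        config = load_config()
        actual = {name: dict(config[name]) for name in config.sections()}
        # Snapshot comparison: update deliberately when the config changes.
        assert actual == {"server": {"host": "localhost", "port": "8080"}}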

When you need to translate your texts, add tests which make sure that you have all the texts that you need, that texts are unique, etc.
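
Again as a sketch; the catalogue layout (one JSON file per language under i18n/) and the language list are assumptions:

    import json

    def load_catalogue(lang):
        # One invented JSON file per language, e.g. i18n/en.json.
        with open(f"i18n/{lang}.json", encoding="utf-8") as f:
            return json.load(f)

    def test_all_languages_have_all_texts():
        reference = set(load_catalogue("en"))
        for lang in ("de", "fr"):
            missing = reference - set(load_catalogue(lang))
            assert not missing, f"{lang} lacks translations for: {missing}"

    def test_texts_are_unique():
        texts = list(load_catalogue("en").values())
        assert len(texts) == len(set(texts)), "duplicate texts in catalogue"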

Assumptions

Convention over configuration only works when everyone agrees what the convention is. Conventions are assumptions. Your brain has to know them since they are no longer in the code. If this approach fails for you, write a test that validates your assumptions.

Check that code throws the exceptions that you expect.

If you have found a bug in a framework and added a workaround, add a test which fails when the bug is fixed. Add a comment “If this test fails, you can remove the workaround.”
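
Both ideas as pytest sketches; parse_port() and buggy_framework_call() are invented stand-ins for your own code and the broken framework call:

    import pytest

    def parse_port(value: str) -> int:
        port = int(value)  # raises ValueError for non-numbers
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    def buggy_framework_call():
        # Stand-in for a framework call that currently returns a wrong result.
        return "wrong result"

    def test_rejects_garbage_port():
        # Nail down the exception we expect our own code to throw.
        with pytest.raises(ValueError):
            parse_port("not-a-number")

    def test_framework_bug_is_still_present():
        # If this test fails, the upstream bug was fixed:
        # you can remove the workaround.
        assert buggy_framework_call() == "wrong result"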

Speed

The world speeds up. No one can afford slow tests, tests that are hard to understand, hard to maintain, tests that get in the way of Get-Things-Done™. Make sure you can run your tests at the touch of a button. Make sure they never fail unless they should. Make sure they fail when they should. Make sure they are small (= execute fast, few lines to understand, little code to write, easy to change, cheap to delete). Ten tests, each asserting a single fact, are better than one test that asserts ten facts. If your tests run for more than ten seconds, you lose.
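
For example, splitting the port checks from the sketch above into one fact per test makes any failure self-explaining:

    # One fact per test: the test name alone tells you what broke.
    def test_port_lower_bound():
        with pytest.raises(ValueError):
            parse_port("0")

    def test_port_upper_bound():
        with pytest.raises(ValueError):
            parse_port("65536")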

Documentation

There is code rot. But long before that, there is documentation rot. Who has time to update the comments and documentation after a code change?

Why not document code usage in tests? Tests tell you the Truth™. Why give someone 100 pages of words they can’t trust when you can give them 100 unit tests they can execute?
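
A usage-documenting test for the invented parse_port() from above; unlike prose, it starts failing the moment it stops being true:

    def test_how_to_parse_a_port():
        # Typical usage: pass the raw string straight from the config file.
        assert parse_port("8080") == 8080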

Conclusion

Make your life easier. Stop wasting time in your debugger, begging for production log files, running code in your head. Write a good test today, it will watch your back for as long as the project lives. Write a thousand good tests and they will be like an army of angels, warding you from suffering, every time you press that button.


Quality in Software

19. November, 2016

Every software developer has seen too much bad code. Which raises the question: what good examples are there?

Python, for one. Coverity, which scans software for all kinds of flaws, ranks the quality of Python among the best: just 0.005 defects per 1,000 lines of code, while the average lies between 15 and 50 defects per 1,000 lines.


Risks of Artificial Intelligence

10. November, 2016

There is a growing group of people arguing that AIs will one day kill us, either by loving or hating us to death. I find their arguments interesting but lacking an important factor: AI is created by (a few) humans.

That means AIs will inherit features from their creators:

  1. Humans make mistakes, so parts of the AI won’t do what they should.
  2. Each human defines “good” in a different way at a different time.
  3. The road to hell is paved with good intentions.

My addition to the discussion is thus: Even if we do everything “as right as possible”, the result will still be “surprising.”

Mistakes

Mistakes happen at all levels of software development. They can be made during the requirements phase, when the goals are set. Requirements are often vague, incomplete, missing or outright wrong.

Software developers then make mistakes, too. They misunderstand the requirements, they struggle with the programming language, their brain simply isn’t at the top of its abilities 100% of the time.

When it comes to AI, the picture gets even more muddled. Nobody knows what “AI” really is. If two people work on the same “AI” problem, their starting set of assumptions is very different.

In many cases, we use neural networks. Nobody really understands neural networks, and that is the key factor: they “learn” by themselves, even if we don’t know exactly what they learn. They come up with “solutions” without a lot of effort on the human side, which is great. It “just works”. But many such projects have failed because the neural network tracked a spurious correlation (the classic, possibly apocryphal, example: a “tank detector” that had really learned to tell sunny photos from overcast ones), something that happens to us humans every day, too.

Good

What is “good”? Is it good when you add a feature to the software? When you’re really uneasy about it? When it’s immoral? Illegal? If it means keeping your job?

Is the success of a project good? What is “success”? That it’s completed on time? Within budget? That it’s completed at all? Or when the result is a rogue AI because too many corners were cut?

Unintentional Side Effects

The book “Avogadro Corp” tells the story of an AI that is created on purpose. Its creator fails to take into account that he’s not alone. Soon, the AI acquires resources it was never meant to have. People are killed; wars are prevented. Is that “success”?

Many people believe that strong leaders are “good” even when all the evidence says otherwise. They translate an insecurity into a wishful fact. If the wish of these people (often the majority) is granted, is that “good”? Is it good to allow a person to reject, because of a personal belief, medicine that would save them? When all evidence suggests that the belief is wrong? Is it good to force happiness on people?

We want AIs to have an impact on the real world: avoid collisions with people and other cars, select the best medicine, make people spend more money on things they “need”, detect “abnormal” behavior of individuals in groups, kill enemies efficiently. Some of those goals are only “good” for a very small group of people. To me, that sounds like the first AIs won’t be created to serve humanity. The incentive just isn’t there.

Conclusion

AIs are built by flawed humans; humans who can’t even agree on a term like “good”. I feel that a lot of people trust AIs and computers because they are based on “math” and math is always perfect, right? Well, no, it’s not. In addition, the perceived perfection of math is diluted by greed, stupidity, lack of sleep and all the other human factors.

To make things worse, AIs are created to solve problems beyond the capability of humans. We use technologies to build them which we cannot understand. The goals to build AIs are driven by greed, fear, stupidity and hubris.

Looking back at history, my prediction is that the first AIs will probably fall victim to the greatest mental power of humans: ignorance.


When Uncle Doc Gets Hacked

6. August, 2016

Most of the time, when users get infected with a computer virus or a Trojan, it’s a nuisance. But what happens when an important person, like your doctor, becomes the victim of a cracker?

How about this story:

I got a mail from a good friend. It had no text, just a link. I clicked the link, and a web site of a big pharmaceutical company opened. It was a bit odd, but I thought nothing of it. I’m a doctor, so I visit a lot of medical websites.

A couple of days after that, I got mails from old friends that thanked me for getting in touch with them again after such a long time. I was puzzled.

Yesterday, I got an email from myself that I never wrote. It seems that when I clicked the link in my web mail, “something” happened.

Apparently, everyone in my address book got spammed.

The attackers got the address book. Which is inside the mail software. Which means they had access to the mail software. Which means they had access to all the mails. Do you exchange mails with your doctor? How much do you like the idea that “someone” out there had access to those mails?

We need to fix computer security.