Your Phone Should be Your Butler

18. October, 2017

A lot of people share private details with the world without being aware of it. They take nude pictures with their phones (the NSA keeps a copy, just in case), they sell the phone without wiping it properly, allowing the next owner to get a good idea of who they are, or they install apps like the one from Facebook which ask “can I do whatever I want with anything I find on your phone?” and happily click the “Yeah, whatever” button (a.k.a. “Accept”).

When people use modern technology, they have a mental model. That model tells them what to expect when they do something (“press here and the screen will turn on”). It also contains other expectations that are rooted in social behavior, like “I take good care of my phone, so it will take good care of me (and my data)”.

That, when you think about it, is nonsense.

A phone is not a butler. In essence, a phone is a personal data collecting device with additional communication capabilities. But its main goal is to learn about you and then manipulate you into buying stuff. It’s about money. Companies want it, you have it, they want you to give it to them. Anything else only exists to facilitate this process. If pain increased revenue, we’d be living in hell.

Case in point: speech-based input. When you click on a page, that doesn’t tell much about you. When you use your finger, the phone can at least feel whether you’re trembling. Are you angry or enthusiastic? We’re getting there. But your voice is rich with detail about your emotional state. More data to milk to make you perfect offers which you simply don’t want to refuse.

A butler, on the other hand, has your interests in mind. They keep private information private instead of selling it to the highest bidder. They look out for you.

The source of the difference? You pay a butler. (S)he is literally working for you. From their phones, a lot of people expect the same service to happen magically and for free. Wrong planet, pals.

Wouldn’t it be great if phones were like butlers? Trustworthy, discreet and helpful instead of just trying to be helpful?

I hope we’ll see more technology like the app Nude (which hides sensitive photos on your phone).

Spreading Bad Software is Immoral

29. September, 2017

From Fefe’s Internet Security Days keynote:

Schlechte Software zu verbreiten ist unmoralisch.

Translation: Spreading sloppy software is immoral. It’s like producing waste and dumping it into a river. Handling it properly would be expensive; illegal dumping saves money and turns it into a SEP (somebody else’s problem).

Writing sloppy software is similar. Instead of investing time into doing it right, you try to externalize costs: the client will somehow (have to) deal with it. They either have to pay you to make it better the second time, or they have to spend time and nerves every day working around its shortcomings.

When we see someone dump toxic waste in a forest, most people are outraged. The same people, when they are managers of a software company, sign contracts that fix the delivery date of something before the requirements are known. Software developers, desperately trying to feel and look competent, shout “Done!” only to collapse into a morose heap of self-pity after a minimum of poking at what this “done” really means.

Fefe is arguing that doing it right is as expensive as doing it sloppily. I have the same hunch. I’ve seen numbers from the Standish Group Chaos Report (alt: Wikipedia, German) which give a good indication of how much failing projects cost: around 20% are a total waste of money since they are eventually killed, 52% cost twice as much as planned, and only 30% make it on time, on budget and with the promised feature set (note: I bet at least half of those 30% made it because the feature set was reduced/readjusted during the project).

If you assume that in 2014, $250 billion was spent on software development in the US, that means a cost of $50 billion for failed projects alone. That is our money. Yours and mine. Companies don’t magically get money; they sell products, and each wasted project eventually means additional figures on some price tag in a shop.

Then we have the roughly $125 billion spent on projects which should have cost $62 billion but needed another $62 billion to make it to the finishing line. It’s harder to tell how much of that is wasted. You can’t count projects that were simply underestimated or that suffered feature creep – additional features cost additional money, so it’s out of budget but not wasted. Let’s assume $10 billion (less than 10% waste overall) in this group.

In a perfect world, that would mean we could spend 24% ($60 billion out of $250 billion) more on software quality without any additional cost.
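
For the curious, here is the same back-of-the-envelope calculation as a small Java sketch. Like the paragraphs above, it assumes that the Chaos Report percentages apply to dollars as well as to project counts, and the $10 billion for overruns is just the guess from the text:

    // Back-of-the-envelope estimate of wasted software spending (all numbers from the text above).
    public class WasteEstimate {
        public static void main(String[] args) {
            double total = 250e9;               // assumed US software development spend in 2014

            double killedWaste = 0.20 * total;  // ~20% of projects are killed: $50 billion wasted outright

            double challenged = 0.50 * total;   // roughly half the money goes to projects costing twice the estimate
            double overrun = challenged / 2.0;  // $62.5 billion spent beyond the original estimates
            double overrunWaste = 10e9;         // guess from the text: only ~$10 billion of that is real waste

            double waste = killedWaste + overrunWaste;  // ~$60 billion
            System.out.printf("Total waste: $%.0f billion (%.0f%% of the spend)%n",
                    waste / 1e9, 100.0 * waste / total);
        }
    }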


Wrong colors in Windows Photo Viewer / Falsche Farben in Windows-Fotoanzeige

18. September, 2017

When colors in the Windows Photo Viewer (Windows 8 to 10) look oddly wrong, it’s probably because of something called a “color profile”.

Solution:

  1. Open the Control Panel (“Systemsteuerung”)
  2. Open “Color Management” (“Farbverwaltung”)
  3. Select your monitor
  4. Check “Use my settings for this device” (“Eigene Einstellungen für das Gerät verwenden”) (otherwise, you can’t click anything at the bottom)
  5. Click “Add” (“Hinzufügen”)
  6. Select “sRGB IEC61966-2.1”
  7. Click “Set as Default Profile” (“Als Standardprofil festlegen”)
  8. Repeat for each monitor
  9. Restart the Photo Viewer (“Fotoanzeige”)

Risks of Artificial Intelligence

10. November, 2016

There is a growing group of people arguing that AIs will one day kill us, either by loving or hating us to death. I find their arguments interesting but lacking an important factor: AI is created by (a few) humans.

That means AIs will inherit features from their creators:

  1. Humans make mistakes, so parts of the AI won’t do what they should.
  2. Each human defines “good” in a different way at a different time.
  3. The road to hell is paved with good intentions.

My addition to the discussion is this: even if we do everything “as right as possible”, the result will still be “surprising.”

Mistakes

Mistakes happen at all levels of software development. They can be made during the requirements phase, when the goals are set. Requirements often are vague, incomplete, missing or outright wrong.

Software developers then make mistakes, too. They misunderstand the requirements, they struggle with the programming language, their brain simply isn’t at the top of its abilities 100% of the time.

When it comes to AI, the picture gets even more muddled. Nobody knows what “AI” really is. If two people work on the same “AI” problem, their starting set of assumptions is very different.

In many cases, we use neural networks. Nobody really understands neural networks, which is the key factor: they “learn” by themselves, even if we don’t know what exactly they learn. So they come up with “solutions” without a lot of effort on the human side, which is great. It “just works”. Many such projects have failed because the neural network tracked a spurious correlation – something that happens to us humans every day.

Good

What is “good”? Is it good when you add a feature to the software? Even when you’re really uneasy about it? When it’s immoral? Illegal? If it means keeping your job?

Is the success of a project good? What is “success”? That it’s completed on time? Within budget? That it’s completed at all? When the result is a rogue AI because too many corners were cut?

Unintentional Side Effects

The book “Avogadro Corp” tells the story of an AI which is created on purpose. The creator failed to take into account that he’s not alone. Soon, the AI acquires resources which it was never meant to have. People are killed, wars are prevented. Is that “success”?

Many people believe that strong leaders are “good” even when all the evidence says otherwise. They translate an insecurity into a wishful fact. If the wish of these people – often the majority – is granted, is that “good”? Is it good to allow a person to reject medicine that would save them, because of a personal belief? When all evidence suggests that the belief is wrong? Is it good to force happiness on people?

We want AIs to have an impact on the real world – avoid collisions with other people and cars, select the best medicine, make people spend more money on things they “need”, detect “abnormal” behavior of individuals in groups, kill enemies efficiently. Some of those goals are only “good” for a very small group of people. For me, that sounds like the first AIs won’t be created to serve humanity. The incentive just isn’t there.

Conclusion

AIs are built by flawed humans; humans who can’t even agree on a term like “good”. I feel that a lot of people trust AIs and computers because they are based on “math” and math is always perfect, right? Well, no, it’s not. In addition, the perceived perfection of math is diluted by greed, stupidity, lack of sleep and all the other human factors.

To make things worse, AIs are created to solve problems beyond the capability of humans. We build them with technologies which we cannot understand. The goals for building AIs are driven by greed, fear, stupidity and hubris.

Looking back at history, my prediction is that the first AIs will probably fall victim to the greatest human mental power: ignorance.


Installing Epson Perfection V300 Photo on openSUSE 13.2

3. July, 2015

Locate the Linux drivers at http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX and search for “v300”.

The search gives two results:

  1. “iscan plugin package” from 2011
  2. “core package&data package” from 2015

You need both. The first one is esci-interpreter-gt-f720-0.1.1-2.*, a plugin which iscan needs to talk to the scanner via USB. Without it, you get odd “Permission denied” errors, and “scanimage --list-devices” will come back with “No scanners were identified.”

Then get iscan-2.30.1-1.usb0.1.ltdl3.x86_64.rpm (not sure what those files with a ~ in the name at the top are) plus iscan-data-1.36.0-1.noarch.rpm from the second search result.

Install all three of them at the same time.

Both scanimage and xsane should now be able to detect and use the scanner.


SoCraTes Day: Testing the Impossible

21. June, 2015

I’m back from SoCraTes Day Switzerland where I held a Code&Hack session called “Testing the Impossible”.

The session is based on this Mercurial repository de.pdark.testing.

Transcript

  • Space Shuttle – They Write the Right Stuff
    • Why wasn’t the bug caught by QA?
    • Why did the bug escape dev?
  • Personal Bug Diary
  • Team Culture: Strengths vs. Blame Game
  • Miscommunication with Customers
  • Anyone can press the “Red Button”
  • Make failures cheap. Fail fast.
  • I’m wrong. I have learned something.
  • Bug-hunting takes time and mindshare.
  • If you find a bug, write a test for it.

How do I test randomness?

  1. Test small pieces
  2. Fix things that vary (IDs, timestamps, etc.); see the sketch below
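
A minimal Java sketch of the second point. The TokenGenerator class and its constructor are hypothetical; the idea is that everything which varies (the random source, the clock) is injected, so a test can pin it down:

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.util.Random;

    // Hypothetical production class: everything that varies is injected.
    class TokenGenerator {
        private final Random random;
        private final Clock clock;

        TokenGenerator(Random random, Clock clock) {
            this.random = random;
            this.clock = clock;
        }

        String nextToken() {
            // Timestamp plus random part; both sources of variation come from outside.
            return clock.instant().toEpochMilli() + "-" + random.nextInt(1000);
        }
    }

    class TokenGeneratorTest {
        public static void main(String[] args) {
            // Fix the things that vary: a seeded Random and a frozen Clock.
            Clock frozen = Clock.fixed(Instant.parse("2015-06-19T10:00:00Z"), ZoneOffset.UTC);
            TokenGenerator a = new TokenGenerator(new Random(42), frozen);
            TokenGenerator b = new TokenGenerator(new Random(42), frozen);

            // Same seed, same clock: the "random" output is reproducible and can be asserted.
            System.out.println(a.nextToken().equals(b.nextToken())); // true
        }
    }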

How do I test a DB?

  • Layer between DB and app code which allows switching between the real DB and a mock
  • Verify generated SQL instead of executing it
    • Fast
    • Allows testing thousands of combinations easily
    • Useful for code that builds search queries
  • Run SQL against real DB during the night (CI server)
  • H2 embedded DB can emulate Oracle, MySQL, PostgreSQL, … (see the sketch after this list)
  • Test at various depths (speed vs. accuracy)
  • Testing Walrus dives deep!
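
To illustrate the H2 bullet, here is a minimal sketch of an in-memory H2 database running in PostgreSQL compatibility mode. It assumes the H2 driver is on the test classpath; the table and data are made up for the example, and the slow-but-accurate run against the real database stays on the nightly CI build:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EmbeddedDbTest {
        public static void main(String[] args) throws Exception {
            // In-memory H2 database emulating PostgreSQL; nothing touches the real DB.
            try (Connection conn = DriverManager.getConnection(
                        "jdbc:h2:mem:testdb;MODE=PostgreSQL", "sa", "");
                 Statement stmt = conn.createStatement()) {

                stmt.execute("CREATE TABLE walrus (id INT PRIMARY KEY, name VARCHAR(100))");
                stmt.execute("INSERT INTO walrus VALUES (1, 'Testing Walrus')");

                // Fast check of the application's SQL against a throwaway schema.
                try (ResultSet rs = stmt.executeQuery("SELECT name FROM walrus WHERE id = 1")) {
                    rs.next();
                    System.out.println(rs.getString("name")); // prints: Testing Walrus
                }
            }
        }
    }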

Sorting the c’t Table of Contents / c’t Inhaltsverzeichnis sortieren

23. December, 2014

A little helper for all c’t readers who want to archive some articles (more) easily: the script linked below sorts the table of contents in the archive by page number when you click on “aktuell”.