Silence.
Children can become anything they want
26. June, 2022
The difference is that some people think “they” means “the children” while other people think it means themselves.
Key Escrow that Might Work
12. December, 2018
Instead of encrypting everything with a single government key, several government agencies need to provide new public keys every day. The private keys must be under the control of a court. Each secure encryption channel needs to subscribe to one or more of those agencies. The court must delete those keys after six months.
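To make the daily rotation concrete, here is a minimal sketch under one possible reading of the scheme: each agency generates the day’s key pair, hands the private half to a court-controlled escrow store, and publishes the public half for subscribed channels. The escrow_store and public_directory objects are hypothetical placeholders, not existing software; only the key generation uses the real Python cryptography package.

```python
from datetime import date, timedelta

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import x25519

RETENTION = timedelta(days=183)  # "the court must delete those keys after six months"

def rotate_daily_key(agency_id, escrow_store, public_directory):
    """Generate today's key pair, escrow the private half, publish the public half."""
    private_key = x25519.X25519PrivateKey.generate()
    public_key = private_key.public_key()
    today = date.today()

    # The private key goes to the court-controlled escrow store (hypothetical object),
    # tagged with the date after which the court must delete it.
    escrow_store.put(
        agency=agency_id,
        day=today,
        delete_after=today + RETENTION,
        key_bytes=private_key.private_bytes(
            serialization.Encoding.Raw,
            serialization.PrivateFormat.Raw,
            serialization.NoEncryption(),
        ),
    )

    # The public key is what subscribed channels fetch to encrypt their escrow copy.
    public_directory.publish(
        agency=agency_id,
        day=today,
        key_bytes=public_key.public_bytes(
            serialization.Encoding.Raw,
            serialization.PublicFormat.Raw,
        ),
    )
```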
Advantages:
- No attacker will be able to monitor any channel of communication for a long period of time.
- Generating and sharing new keys can be automated easily.
- A single stolen key will just compromise a small fraction of the whole communication.
- Judges will decide in court which messages can be deciphered during the storage period.
- It’s still possible to decipher all messages of a person if there is a lawful need.
- If a key is lost by accident, the damage is small.
- No one can secretly decode messages.
- The system can be adapted as attackers find ways to game it.
Disadvantages:
- More complex than a single key or single source for all keys. It will break more often.
- Pretty expensive.
- Judges need to be trained to understand what those keys mean.
- Keys will be in more hands, creating more points of attack.
Always remember that in a democracy, the law isn’t about justice but about balancing demands. There are people afraid that embarrassing details of their private communication will be exposed, as well as people trying to cover the tracks of a crime.
Right now, there is no better way to determine which communication needs to be cracked open than a normal court case.
Reasoning:
If we used one or a few keys to encrypt everything (just because it’s easier), that would make this data a hugely attractive target. Criminals would go to great lengths to steal those keys. If there are many keys, each one of them becomes less important; the damage a single key can cause is much smaller. It would also mean they would have to steal many keys, which raises the chances of getting caught.
I was wondering if one key per month would be enough but there is really no technical reason to create so few. We have the infrastructure to create one every few seconds but that might be overkill. Once per day or maybe once per hour feels like a sweet spot. Note: When the technical framework has been set up, it should be easy to configure it to a different interval.
If we spread the keys over several organizations, an attack on one of them doesn’t compromise everyone. Also, software developers and users can move around, making it harder for unlawful espionage to track them.
Police officers and secret services should not be left alone with the decision of what they can watch. Individuals make mistakes. That’s one reason why you talk to a friend when you make important decisions. Therefore, the keys should be in the hands of the law.
The law isn’t perfect. My thinking is: if a perfect system existed, we would be using it. Since we’re using the law, the perfect solution probably doesn’t exist, or it doesn’t exist yet. In either case, using court rulings is the best solution we have right now to balance conflicting demands. The keys could be confiscated when a case is opened and destroyed when the case is closed, to avoid losing access halfway through the proceedings.
Mistakes will happen. Systems will break, keys will be lost, important messages will become indecipherable, criminals will attack the system, idiots will put keys on public network drives. Is there a way that this can be avoided? I doubt it. Therefore, I try to design a system which shows a certain resilience against problems to contain the damage.
For example, a chat app can request keys from its source. If that fails, it has options (a rough sketch follows the list):
- Use a previous key
Log an error in another system which monitors the health of the key sources
- Automatically ask a different source
- Tell the user about it and refuse to work
Let the user choose a different source
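Here is a rough sketch of what such a fallback chain could look like. All the names (KeyUnavailable, key_cache, health_monitor, ask_user_to_pick_source, and so on) are made up for illustration; this is not an existing API, just one way to wire the options above together.

```python
class KeyUnavailable(Exception):
    """Raised by a key source when it cannot deliver the current key."""

def fetch_channel_key(sources, key_cache, health_monitor, ask_user_to_pick_source):
    """Walk through the fallback options from the list above, in order."""
    for source in sources:
        try:
            key = source.fetch_current_key()
            key_cache.store(source, key)        # remember it in case everything fails later
            return key
        except KeyUnavailable as err:
            health_monitor.report(source, err)  # log an error in the health-monitoring system
            continue                            # automatically ask a different source

    previous = key_cache.latest()               # use a previous key if we still have one
    if previous is not None:
        return previous

    chosen = ask_user_to_pick_source()          # tell the user; let them pick another source
    if chosen is None:
        raise KeyUnavailable("no key source reachable; refusing to work")
    return chosen.fetch_current_key()
```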
The Benevolent AI
1. August, 2018
There is a lot of argument about how AI will kill us all. Some argue that AI will see us as a threat and wipe us out, just in case (the Skynet faction). Others argue AI will pamper us to death. The third group hopes that AI will just get bored and leave.
But how about the benevolent AI? Imagine a life which is fulfilling and demanding. It has highs and lows, disasters and triumphs. How about we create an AI with the goal of giving such a life to everyone?
Of course, everyone has a different idea of what such a life would be. That would make such an effort more complicated but not impossible.
It would also be very expensive to give everyone their perfect life. This factor depends on the number of people (which will go down by itself) and on how close we want everyone to get to the goal of “perfect”. In the beginning, the AI will be both immature and low on resources. Over time, it will learn from its mistakes, and people will start supporting the idea of giving it more money and power once it works. So this is a problem which will resolve itself over time.
Then there are people who are never happy with what they have. People who can never get enough. Which I think translates to “the people around such persons don’t know how to teach them to be satisfied”. I think an AI could nudge those greedy people to become more content and enjoy their lives more.
This approach would put an end to bullying. On one hand, people don’t like being bullied, so the AI would have to put an end to it. On the other hand, I’m pretty sure the bullies aren’t happy with their lives either. So making them more satisfied might already be enough to put an end to bullying.
Why Apple Sucks
10. November, 2017
People often ask me why I hate Apple products. My answer: “I’m not compatible.”
I was always looking for a good example of what I mean. In my hands, Apple products crash, show weird behavior, or have important features that are utterly broken.
Case in point: Apple has released an “everyone can write an App” course on iTunes. Here is what I did:
- I clicked the link
- itunes.apple.com opened in my web browser
For a few seconds, nothing happened
- Then I got a message box: “Do you want to open this link in iTunes?”
- Uhh… yes?
- iTunes opened. Showing my music library. Where is the product that you were supposed to open? Thanks for wasting 10 seconds of my life.
- Clicked the link again.
- Clicked “yes” again
- Now iTunes shows a page with “Swift Playgrounds”. Probably what I wanted to see, I’m not sure anymore.
- I click on the product.
- iTunes opens a web page in my browser. WTF???? What’s the point of having iTunes when it can’t even download something!?
- The web page says “Please install iTunes.”
- I give up.
That’s one example in which Apple products waste my time. It’s almost always like that.
Apple, I hate you.
Artificial Ethics
24. October, 2017
While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?
Around 26:00, Isaac talks about ways to develop AIs to control androids. He would like to use the safe approach of manually programming all the specific details to create an AI.
The “manual programming” path has been tried since the 1960s and it’s now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: Even if you tried, you’d run out of time. Machine learning is the way to go.
Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).
Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real life incidents and see how it rules. Note: The output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions on how to solve an ethical dilemma. It could just judge an existing solution.
It’s important that the output (judgement) is checked and the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).
Based on that, another machine learning algorithm (MLA) could then try many different solutions, present those to the ethics one, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
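As a minimal sketch of that two-model loop (all names are illustrative assumptions, not an existing library): the ethics model only judges, returning a score between 0 and 1, while the second algorithm generates candidate solutions and keeps the best-scoring acceptable one.

```python
def propose_ethical_solution(problem, solution_generator, ethics_model,
                             candidates=100, threshold=0.8):
    """Generate many candidate solutions, let the ethics model judge each one
    (a score between 0 = very unethical and 1 = very ethical), and keep the best
    of the acceptable ones. If none is acceptable, escalate to a human."""
    acceptable = []
    for _ in range(candidates):
        solution = solution_generator.sample(problem)
        score = ethics_model.judge(problem, solution)
        if score >= threshold:
            acceptable.append((score, solution))

    if not acceptable:
        return None  # nothing acceptable: a human has to take over
    acceptable.sort(key=lambda pair: pair[0], reverse=True)
    return acceptable[0][1]

# During the supervised phase, humans would review the picked solution and their
# verdict would be fed back into ethics_model so it can tune itself, as described above.
```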
That would eventually lead to ethical machines. Which would cause new problems: There will eventually be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.
Your Phone Should be Your Butler
18. October, 2017
A lot of people share private details with the world without being aware of it. For example, they take nude pictures with their phones (the NSA keeps a copy, just in case), or they sell the phone without wiping it properly, allowing the next owner to get a good idea of who you are, or they install apps like the one from Facebook which ask “can I do whatever I want with anything I find on your phone?” and people happily click the “Yeah, whatever” button (a.k.a. “Accept”).
When people use modern technology, they have a mental model. That model tells them what to expect when they do something (“press here and the screen will turn on”). It also contains other expectations that are rooted in social behavior. Like “I take good care of my phone and it will take good care of me (and my data)”.
That, when you think about it, is nonsense.
A phone is not a butler. In essence, a phone is a personal data collecting device with additional communication capabilities. But the main goal is to learn about you and then manipulate you into buying stuff. It’s about money. Companies want it, you have it, they want you to give it to them. Anything else only exists to facilitate this process. If pain increased revenue, we’d be living in hell.
Case in point: Speech based input. When you click on a page, that doesn’t tell much about you. When you use your finger, the phone can at least feel when you’re trembling. Are you angry or enthusiastic? We’re getting there. But your voice is rich with detail about your emotional state. More data to milk to make you perfect offers which you simply don’t want to refuse.
A butler, on the other hand, has your interests in mind. They keep private information private instead of selling it to the highest bidder. They look out for you.
The source of the difference? You pay a butler. (S)he is literally working for you. On the phone, a lot of people expect the same service to happen magically and for free. Wrong planet, pals.
Wouldn’t it be great if phones were like butlers? Trustworthy, discreet and helpful instead of just trying to be helpful?
I hope we’ll see more technology like the app Nude (which hides sensitive photos on your phone).
Spreading Bad Software is Immoral
29. September, 2017
From Fefe’s Internet Security Days keynote:
Schlechte Software zu verbreiten ist unmoralisch.
Translation: Spreading sloppy software is immoral. It’s like producing waste and dumping it into a river. Proper handling would be expensive; illegal dumping saves money and turns it into a SEP (somebody else’s problem).
Writing sloppy software is similar. Instead of investing time into doing it right, you try to externalize costs: The client will somehow (have to) deal with it. They either have to pay you to make it better the second time or they have to spend time and nerves every day to work around shortcomings.
When we see someone dump toxic waste in a forest, most people are outraged. The same people, when they are managers of a software company, sign contracts that define the delivery date of something before knowing the requirements. Software developers, desperately trying to feel and look competent, shout “Done!” only to collapse into a morose heap of self-pity after a minimum of poking at what this “done” really means.
Fefe is arguing that doing it right is as expensive as doing it sloppily. I have the same hunch. I’ve seen numbers in the Standish Group Chaos Report (alt: Wikipedia, German) which give a good indication of how much failing projects cost: Around 20% are a total waste of money since they are eventually killed, 52% cost twice as much as planned, and only 30% make it on time, on budget and with the promised feature set (note: I bet at least half of those 30% made it because the feature set was reduced/readjusted during the project).
If you assume that in 2014, $250 billion was spent on software development in the US, that means a cost of $50 billion for failed projects alone. That is our money. Yours and mine. Companies don’t magically get money; they sell products, and each wasted project eventually means additional figures on some price tag in a shop.
Then we have $125 billion which should have been $62 billion, but another $62 billion was necessary to make it to the finishing line. It’s harder to tell how much of that is wasted. You can’t count projects that were simply underestimated or that suffered from feature creep – additional features cost additional money, so that’s over budget but not wasted. Let’s assume $10 billion (less than 10% waste overall) in this group.
In a perfect world, that would mean we could spend 24% ($60 billion out of $250 billion) more on software quality without any additional cost.
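Spelled out, the back-of-the-envelope arithmetic from the last three paragraphs looks like this; all inputs are the rough estimates quoted above, not measured data.

```python
total_spend = 250e9        # assumed US software development spend in 2014, in dollars

killed_waste = 0.20 * total_spend   # ~20% of projects are killed outright: $50 billion wasted

challenged_spend = 125e9   # the group that "should have been $62 billion"
challenged_waste = 10e9    # rough guess for the waste inside this group (see above)

total_waste = killed_waste + challenged_waste   # $60 billion
waste_share = total_waste / total_spend         # 0.24, i.e. 24%

print(f"${total_waste / 1e9:.0f} billion wasted, "
      f"{waste_share:.0%} of ${total_spend / 1e9:.0f} billion")
# -> $60 billion wasted, 24% of $250 billion
```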
Dark Forest is a Fairy Tale
16. December, 2021
The Dark Forest, an idea developed by Liu Cixin for his Remembrance of Earth’s Past series (also known for its first book, “The Three-Body Problem”), is just a fairy tale: interesting to think about, with a moral, but not based on reality.
Proof: We are still here.
The Dark Forest fairy tale is a solution to the Fermi paradox: If there are billions of planets like Earth out there, where is everyone? The Dark Forest claims that every civilization that is foolish enough to expose itself gets wiped out.
Fact: We have exposed ourselves for millions of years now. Our planet has sent the signal “lots of biological life here” for about 500 million years to anyone who cares.
Assuming that the Milky Way has a size of 100’000 light years, this means every civilization out there has known about Earth for at least 499.9 million years. If they were out to kill us, we would be long gone by now. Why wait until we can send rockets to space if they are so afraid of any competition?
How would they know about us? We can already detect planets in other star systems (the count at the time of writing is 4604, see http://www.openexoplanetcatalogue.com/). In a few years, we’ll be able to tell which planets close to us can carry life, for values of “close” around 100 light years. A decade later, I expect that to work for any star system within 1’000 light years of us. In 100 years, I expect scientists to come up with a trick to scan every star in our galaxy. An easy (if slow) way would be to send probes up and down out of the disk to get a better overview. Conclusion: We already know a way to see every star system in the galaxy. It’s only going to get better.
Some people worry that the technical signals we send could trigger an attack, but those signals get lost in the background noise fairly quickly (much less than 100 light years). This is not the case for the most prominent signal: the amount of oxygen in Earth’s atmosphere. If you’re close to the plane of the ecliptic (i.e. when you look at the Sun, the Earth will pass between you and the Sun), you can see the oxygen line in the star’s spectrum for thousands of light years. Everyone else has to wait until Earth moves in front of some background object.
There is no useful way to hide this signal. We could burn the oxygen, making Earth inhospitable. Or we could cover the planet with a rock layer; also not great unless you can live on a rock and salt water diet.
For an economic argument: When Rwanda invaded the Democratic Republic of Congo to get control of coltan mining, it made roughly $240 million/yr from selling the ore. China makes that much money by selling smartphones and electronics to other states every day (source: Homo Deus by Yuval Noah Harari). My take: killing other civilizations is a form of economic suicide.
Conclusion: The Dark Forest is an interesting thought experiment. As a solution for the Fermi paradox, I find it implausible.