Risks of Artificial Intelligence

10. November, 2016

There is a growing group of people arguing that AIs will one day kill us, either by loving or hating us to death. I find their arguments interesting, but they miss an important factor: AI is created by (a few) humans.

That means AIs will inherit features from their creators:

  1. Humans make mistakes, so parts of the AI won’t do what they should.
  2. Each human defines “good” in a different way at a different time.
  3. The road to hell is paved with good intentions.

My addition to the discussion is thus: Even if we do everything “as right as possible”, the result will still be “surprising.”

Mistakes

Mistakes happen at all levels of software development. They can be made during the requirements phase, when the goals are set. Requirements are often vague, incomplete, missing or outright wrong.

Software developers then make mistakes, too. They misunderstand the requirements, they struggle with the programming language, and their brains simply aren’t at the top of their abilities 100% of the time.

When it comes to AI, the picture gets even more muddled. Nobody knows what “AI” really is. If two people work on the same “AI” problem, their starting set of assumptions is very different.

In many cases, we use neural networks. Nobody really understands neural networks, and that is the key factor: they “learn” by themselves, even if we don’t know exactly what. They come up with “solutions” without a lot of effort on the human side, which is great. It “just works”. But many such projects fail because the neural network tracks a spurious correlation – something that happens to us humans every day.

Good

What is “good”? Is it good to add a feature to the software even when you’re really uneasy about it? When it’s immoral? Illegal? If it means keeping your job?

Is the success of a project good? What is “success”? That it’s completed on time? Within budget? That it’s completed at all? When the result is a rogue AI because too many corners were cut?

Unintentional Side Effects

The book “Avogadro Corp” tells the story of an AI which is created on purpose. Its creator fails to take into account that he’s not alone. Soon, the AI acquires resources which it was never meant to have. People are killed, wars are prevented. Is that “success”?

Many people believe that strong leaders are “good” even when all the evidence says otherwise. They turn an insecurity into a wishful fact. If the wish of these people – often the majority – is granted, is that “good”? Is it good to allow a person to reject medicine which would save them, because of a personal belief? When all evidence suggests that the belief is wrong? Is it good to force happiness on people?

We want AIs to have an impact on the real world – avoid collisions with other people and cars, select the best medicine, make people spend more money on things they “need”, detect “abnormal” behavior of individuals in groups, kill enemies efficiently. Some of those goals are only “good” for a very small group of people. For me, that sounds like the first AIs won’t be created to serve humanity. The incentive just isn’t there.

Conclusion

AIs are built by flawed humans; humans who can’t even agree on a term like “good”. I feel that a lot of people trust AIs and computers because they are based on “math” and math is always perfect, right? Well, no, it’s not. In addition, the perceived perfection of math is diluted by greed, stupidity, lack of sleep and all the other human factors.

To make things worse, AIs are created to solve problems beyond the capability of humans. We use technologies to build them which we cannot understand. The goals to build AIs are driven by greed, fear, stupidity and hubris.

Looking back at history, my prediction is that the first AIs will probably fall victim to the greatest human mental power: ignorance.


Technical Solutions to Amok Runs

3. August, 2016

Every now and then, an idiot realizes that his life isn’t exciting enough and decides to do something about it. Note: I apply humor to horror.

Some people (I think of them as idiots as well, just a different flavor) think that arming everyone is the best solution to this problem. Maybe these people never get angry.

Anyway. Here is my attempt at a solution: Data contracts.

A data contract is a contract which is attached to data.

Example: I could attach a contract to data which my cell phone produces, for example, “code looking for the signature of gunshots can access data which the microphone produces.” Similarly, I could attach “code looking for symptoms of mass panic can access data from my mobile’s acceleration sensors.” And lastly, “code which detected mass panic or gunshots is allowed to access location data on my mobile.”

To build such a system, all data needs to be signed (so it can be attributed to someone) and it needs to contain the hash code of the contract. Big data services can then look up people by their signature (which would also allow the creation of a public / shared signature for an anonymous entity) and, from there, get the data contracts.
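As a rough illustration of the record format described above, here is a minimal sketch in Python. The field names, the example contract text, and the use of HMAC are my own illustrative choices – a real system would need public-key signatures so that third parties can verify attribution without knowing the secret:

```python
import hashlib
import hmac
import json

# Illustrative contract text; in a real system this would be a
# machine-readable contract document.
CONTRACT = b"code looking for the signature of gunshots can access microphone data"

def make_record(payload: dict, secret: bytes) -> dict:
    """Wrap a data payload with the hash of its contract, then sign it."""
    contract_hash = hashlib.sha256(CONTRACT).hexdigest()
    body = json.dumps({"payload": payload, "contract": contract_hash},
                      sort_keys=True)
    signature = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify(record: dict, secret: bytes) -> bool:
    """Check that the record really comes from the holder of the secret."""
    expected = hmac.new(secret, record["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A service receiving such a record can verify the signature, extract the contract hash from the body, and look up the full contract before touching the payload.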

Now that in itself doesn’t protect against abuse of data by greedy / evil corporations. The solution here is the same as in the “real” world: auditing. People applying for access to this system need to undergo an audit where test data is fed into the system and auditors (humans, bots, or both) validate the operation. The audit results in a digital document signed by the auditors, which then allows the applicants to access the data feeds.

This approach would then protect my privacy from people wanting my movement profiles to annoy me with adverts while safety services could still use the data to automatically detect disasters and dispatch help without me having to fumble for my phone while running for my life.

On the downside, attackers will start to shoot mobile phones.

If we look into the future, unstable people could be sentenced to share some of their data with automated systems which monitor their mental state – I’m positive that, as you read this, several companies are working on systems to determine the mental state of a person by looking at sensor data from their phones or fitness sensors. Of course, we’d need an improved justice system (our current one is too busy with things like patent lawsuits or copyright violations) with careful checks and balances to prevent another kind of idiot (the kind that doesn’t believe that everything has a cost) from running amok with this (i.e. putting “unwanted” people into virtual jails).

There is a certain amount of “bad things happening” that we have to accept as inevitable. Everyone who disagrees is invited to move to North Korea where they have … ah … “solved” this already.

For everyone else, this idea has a few holes. It needs computer readable contracts, a way to negotiate contracts between computers (with and without human interaction), it needs technology for auditors where they can feed test data into complex systems and see where it goes.

I think computer readable contracts will happen in the next few years; negotiating contracts and knowing what contracts you have is a big issue for companies. Their needs will drive this technology. Eventually, you’ll be able to set up a meeting with a lawyer who will configure a “contract matching app” on your mobile. When some service wants your data, the app will automatically approve the parts of the contract which you already agree with, and reject those which you’ll never accept. If the service still wants to do business with you, you’ll get a short list of points which are still undecided. A few swipes later, you’ll be in business – or you’ll know why not.

The test data problem can be solved by adding new features to the big data processing frameworks. Many of these already have ways to describe data processing graphs which the framework will then turn into actual data processing. For documentation purposes, you can already examine those graphs. Adding signature tracking (when you already have to process the signatures anyway to read the data) isn’t a big deal. Auditing then means checking those signature tracks.
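The signature tracking idea can be sketched in a few lines. The tiny “framework” below is invented purely for illustration – it is not the API of any real big data system – but it shows the principle: every record remembers which signed inputs contributed to it, so an auditor can check where test data ended up:

```python
# Toy sketch of signature tracking through a processing graph.
# Each record carries the set of signatures of all data that
# contributed to it; processing nodes union those sets.

def tag(value, signature):
    """Wrap a raw value with the signature of its producer."""
    return {"value": value, "signatures": {signature}}

def combine(records, fn):
    """A processing node: applies fn to the values, unions the signatures."""
    sigs = set()
    for r in records:
        sigs |= r["signatures"]
    return {"value": fn([r["value"] for r in records]), "signatures": sigs}

# Feed audited test data through a two-step pipeline.
a = tag(3, "alice-sig")
b = tag(4, "bob-sig")
total = combine([a, b], sum)                   # first node: 3 + 4
doubled = combine([total], lambda v: v[0] * 2) # second node: 7 * 2
```

An auditor inspecting `doubled["signatures"]` sees exactly whose data reached that output, without needing to understand the processing steps themselves.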

It’s not perfect but perfect doesn’t exist.


Good and Bad People

23. July, 2016

Good and bad people say “something needs to be done.” The difference is that bad people think “no matter the cost” while good people always keep in mind that change alone isn’t “good” as such. It can always be for the worse.


Random Conway’s Game of Life

27. December, 2015

Recently, I followed a discussion about free will. The starting point was the question whether a million exact clones placed in the same situations would show the same behavior, and whether they would diverge over time.

My stance is that they would behave identically in the beginning but, subject to quantum physics, small differences would creep in. Big things like hair color or beliefs would be very stable. But a complex decision which could go either way might be influenced by the fact that a molecule binds a few nanoseconds later than in another clone. A neuron would fire slightly later than in the other clones, and a different option would be chosen.

Which made me remember Conway’s Game of Life. Life has been shown to be Turing complete – you can construct machines which can compute anything that can be computed.

Now which change to Life would take it to the next level? Make it able to compute more than a Turing machine can?

If there is no metaphysical soul, no God-induced immortal energy in us, then our ability to comprehend must come from the physical body that we have. If neurons are small switches that trigger other switches when enough inputs agree, then where does comprehension – which simple computers certainly lack – come from?

Maybe the solution is that our neurons have a random component – quantum physics. Maybe the solution is a version of Life where survival with more than three neighbors isn’t impossible – just unlikely? Where cells can come to life from nothing by pure (small) chance?
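The variant suggested above can be sketched directly. This is a minimal Python version of Life on a torus where overcrowded cells survive with a small probability and dead cells can spontaneously come alive; the parameter names and probability values are my own illustrative choices, not anything from the post. With both probabilities set to zero, it reduces to classic Life:

```python
import random

def step(grid, p_survive_crowded=0.05, p_spontaneous=0.001):
    """One generation of a probabilistic variant of Conway's Life.

    Standard rules, except: a live cell with more than three neighbors
    survives with small probability, and a dead cell with no birth
    condition can come alive by pure (small) chance.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c]:
                if n in (2, 3):
                    new[r][c] = 1                      # classic survival
                elif n > 3 and random.random() < p_survive_crowded:
                    new[r][c] = 1                      # unlikely survival
            else:
                if n == 3:
                    new[r][c] = 1                      # classic birth
                elif random.random() < p_spontaneous:
                    new[r][c] = 1                      # life from nothing
    return new
```

Whether such a random component would buy any extra computational power is exactly the open question of the post; the sketch only shows how cheaply the rules themselves can be bent.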


Paris

16. November, 2015

The foundation of civilization is the ability of the community to withstand their own death wishes and murderous instincts — André Glucksmann (source; my own translation)

There are people who will tell you that it’s a dog-eat-dog world. That’s a lie. The building in which you sit while you read this is the result of cooperation between hundreds of thousands, maybe even millions of people. They dug the earth for ore and cement. They built trucks to transport them. They built factories to refine them and turn them into steel and tools. The process of smelting and forging steel has been developed by thousands of people over ten thousand years. Thousands of people all over the globe worked to build the device(s) which you use to read this.

Civilization is a result of cooperation by millions of people who have never met. Cooperation is the foundation on which we all stand. No bomb can change that – unless we allow ourselves to be manipulated by people that we despise.


Riddle

11. November, 2015

It takes years and hundreds, sometimes thousands of people to build but only one person and a moment to destroy me. What am I?

Answer (link goes to Wikipedia)


How Much do You Have to Hide?

16. September, 2015

When confronted with surveillance the usual reply is “nothing to hide.”

This answer is wrong. Let me tell you a story.

For over one hundred years, the city of Amsterdam had a census. It recorded your gender, relationship status, number of children, parents, and where you lived. All this information was used to make life better for everyone. And it worked. People were happy. The city government was efficient. It could base decisions on statistics and data instead of gut feelings. They were the first to use computers to efficiently store and handle the data.

On May 10, 1940, the Nazis took the city. Suddenly, one bit of information – faith – decided over life and death. The Nazis took the data which had been collected and efficiently rounded up all the people they wanted to murder.

Surveillance is not about what you have to hide, it’s about how you can be hurt. It’s the question of how much someone hiding in a faceless organization wants to ruin your life.


The Quest to End Poverty

25. June, 2015

Poverty is a huge problem, even for those not affected. At best, the sight is disturbing; at worst, the sicknesses bred by many people crammed together don’t care much about bank accounts – even when it might help that you can pay for doctor’s bills and meds.

In 2011, over $150 billion was spent on development aid. That sum sounds staggering if you look at the number alone. But keep in mind that the world’s GDP in 2011 was $75’621 billion (use the table view to see per-country numbers) – aid is 0.2% of that. The US military budget alone was $610 billion. World-wide aid was just a quarter of what the US spends on its military.

What’s more, a lot of that money never leaves the donor country – it’s used to “pay” for debt which the receiving country already has towards the donor – or it’s vouchers for goods which the donor produces (like guns and other military equipment). As odd as that may sound at first: Development aid is often another tool to develop your own country. If it helps a struggling third world place, all the better.

But the problem runs deeper. Too deep to explain in a blog post, but TED compiled a list of 11 thought-provoking talks on how we could end poverty. My favorite didn’t make it into the list: Gary Haugen: The hidden reason for poverty the world needs to address now.

Think!


Trouble Sleeping?

21. May, 2015

There are people who are proud that they don’t need much sleep.

Don’t listen to them. People who don’t sleep enough make more mistakes, they are dumber than they could be, and they ruin their health and their sex life, to name just a few of the most important downsides.

Yes, it’s easy to reduce the amount of time you sleep every day, and the negative effects aren’t obvious. People who sleep just a few hours every night feel powerful and agitated – mostly because of the adrenaline levels caused by the stress of sleep deprivation. But adrenaline also makes you reckless and unreliable. It bends you towards risky behavior which causes accidents and disasters like the world financial crisis.

If you’re one of these people, stop it. The additional hours don’t really make you more productive, no matter how much you would like to believe it. Your ruler is broken – a sleep-deprived brain isn’t able to notice just how tired it really is. And even at the best of times, it costs your company almost $2’000 every year for every employee.

Richard Wiseman has put together a short list of tips to help you sleep. Plus he has created the World’s Most Relaxing Music. One hour of sound that has been scientifically engineered to relax you.


Surveillance Produces Blackmail Instead of Security

1. March, 2015

They say that “good” people have nothing to hide and, therefore, nothing to fear from surveillance.

Every one of us has something to hide. When we are confronted with our dark side, an immediate, temporary loss of memory sets in and we say “I have nothing to hide” because we can’t remember on the spot. The source of this behavior isn’t “being good”, it’s peer pressure and guilt.

Everyone reading these lines has hidden something. Maybe you were not 100% honest when filing your last tax return. Or you lied to the police about how many drinks you had. You lie to yourself when you’re speeding, thinking that you’re such a great driver that you can’t possibly cause an accident. Maybe you had an affair, or a “harmless” flirt, or maybe you visit a brothel. A few years ago, it was social suicide to let anyone, even your best friends, know that you’re homosexual. It still is in many parts of the world. In the “first world,” it’s what happened during the last party, an awkward sickness, embarrassing thoughts, which odd web sites you’re visiting.

Every one of us has something to hide. The average person, perfectly in sync with the middle of society, is a myth.

People lose jobs over Twitter posts, party photos on Facebook. Some never get a job because of a criminal record or their family name. Police officers with access to surveillance equipment spy on their spouses or look into women’s bathrooms. Many partners of NSA agents were under surveillance without any official mandate.

Which brings us to the core of surveillance: The main product of surveillance isn’t security – it’s extortion.

When secret services pile up incriminating evidence against someone, they don’t tell the police. In most states, they aren’t allowed to. They keep it. For when it’s needed. When “someone” decides that “something” needs to be done and there is no legal way.

Not convinced? Well, if “nothing to hide” were true, then why do politicians, agencies and companies absolutely and firmly refuse to let us see what they are doing? “Nothing to hide” is always only used as an argument to watch someone else. It implies “I have nothing to hide, so you don’t need to even try. Go away. Nothing to see here.” (Adam D. Moore, author of Privacy Rights: Moral and Legal Foundations, from “Nothing to hide argument“)

That’s why we need to be concerned about surveillance. We need to discuss what we want to achieve and what the costs are.

Do we want to make mass surveillance illegal? We could but we’d have to close down Google and Facebook.

Do we want total surveillance? Can we evolve all the societies on planet Earth to an extent where we can be honest with anyone about absolutely anything? Do we want to? How many people would get that killed?

Or do we have to strike a balance: find out how much surveillance is healthy, what the open and hidden costs are, and how to control the people who use it – because it’s in the nature of most humans to do anything as long as they can get away with it?

It’s not a discussion many people want to have – we have so many things on our minds – but as usual: if we don’t make up our minds, someone else will do it for us. Only with our best interests in mind, of course.