Artificial Ethics

24. October, 2017

While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?

Around @26:00, Isaac talks about ways to develop AIs to control androids. He would like to use the safe approach of manually programming all the specific details to create an AI.

The “manual programming” path has been tried since the 1960s and is now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.

Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).

Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: The output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions for ethical dilemmas; it could only judge an existing solution.
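
To make this concrete, here is a minimal sketch of what such a judge could look like, assuming a corpus of incident descriptions labeled by human reviewers. The incidents, labels and library choice below are mine, purely for illustration:

```python
# A toy "ethics judge": scores a described solution between 0 and 1.
# The training data is invented; a real system would be trained on a
# huge corpus of laws, philosophy, case studies and poll results.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical incidents, labeled by human reviewers (1 = ethical, 0 = not).
incidents = [
    "returned the lost wallet to its owner",
    "shared a patient's records without consent",
    "reported the safety defect despite pressure from management",
    "sold user data to the highest bidder",
]
labels = [1, 0, 1, 0]

judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
judge.fit(incidents, labels)

# The judge only scores a proposed solution; it does not generate one.
score = judge.predict_proba(["publish the breach and notify affected users"])[0, 1]
print(f"ethics score: {score:.2f}")  # a number between 0 and 1
```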

It’s important that the output (judgement) is checked and the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).

Based on that, another machine learning algorithm (MLA) could then try many different solutions, present them to the ethics one, and pick the best ones, as sketched below. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
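
A sketch of that propose-and-judge loop, with stubs standing in for both algorithms (all names and the selection strategy are hypothetical):

```python
# Sketch: a proposer generates candidate solutions, the ethics judge
# scores each one, and only the best-scoring candidates survive.
# Both functions are stubs standing in for real, trained models.
import random

def propose_solutions(problem: str, n: int = 10) -> list[str]:
    # Stand-in for a generative model; it just labels variants here.
    return [f"{problem} -- candidate {i}" for i in range(n)]

def ethics_score(solution: str) -> float:
    # Stand-in for the trained judge; returns a value in [0, 1].
    return random.random()

def best_solutions(problem: str, keep: int = 3) -> list[tuple[float, str]]:
    scored = [(ethics_score(s), s) for s in propose_solutions(problem)]
    scored.sort(reverse=True)
    return scored[:keep]

# Human supervisors review these picks and feed corrections back into
# ethics_score so it can tune itself over time.
for score, solution in best_solutions("reduce traffic accidents"):
    print(f"{score:.2f}  {solution}")
```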

That would eventually lead to ethical machines. Which would cause new problems: one day, there will be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.

Your Phone Should be Your Butler

18. October, 2017

A lot of people share private details with the world without being aware of it. For example, they take nude pictures with their phones (the NSA keeps a copy, just in case), or they sell the phone without wiping it properly, allowing the next owner to get a good idea of who they are, or they install apps like the one from Facebook which ask “can I do whatever I want with anything I find on your phone?” and happily click the “Yeah, whatever” button (a.k.a. “Accept”).

When people use modern technology, they have a mental model. That model tells them what to expect when they do something (“press here and the screen will turn on”). It also contains other expectations that are rooted in social behavior. Like “I take good care of my phone and it will take good care of me (and my data)”.

That, when you think about it, is nonsense.

A phone is not a butler. In essence, a phone is a personal data collecting device with additional communication capabilities. Its main goal is to learn about you and then manipulate you into buying stuff. It’s about money. Companies want it, you have it, they want you to give it to them. Anything else only exists to facilitate this process. If pain increased revenue, we’d be living in hell.

Case in point: speech-based input. When you click on a page, that doesn’t tell much about you. When you use your finger, the phone can at least feel when you’re trembling. Are you angry or enthusiastic? We’re getting there. But your voice is rich with detail about your emotional state. More data to milk, to make you perfect offers which you simply can’t refuse.

A butler, on the other hand, has your interests in mind. They keep private information private instead of selling it to the highest bidder. They look out for you.

The source of the difference? You pay a butler. (S)he is literally working for you. From the phone, a lot of people expect the same service to happen magically and for free. Wrong planet, pals.

Wouldn’t it be great if phones were like butlers? Trustworthy, discreet and helpful instead of just trying to be helpful?

I hope we’ll see more technology like the app Nude (which hides sensitive photos on your phone).

Risks of Artificial Intelligence

10. November, 2016

There is a growing group of people arguing that AIs will one day kill us, either by loving or hating us to death. I find their arguments interesting but lacking an important factor: AI is created by (a few) humans.

That means AIs will inherit features from their creators:

  1. Humans make mistakes, so parts of the AI won’t do what they should.
  2. Each human defines “good” in a different way at a different time.
  3. The road to hell is paved with good intentions.

My addition to the discussion is thus: Even if we do everything “as right as possible”, the result will still be “surprising.”

Mistakes

Mistakes happen at all levels of software development. They can be made during the requirements phase, when the goals are set. Requirements often are vague, incomplete, missing or outright wrong.

Software developers then make mistakes, too. They misunderstand the requirements, they struggle with the programming language, their brain simply isn’t at the top of its abilities 100% of the time.

When it comes to AI, the picture gets even more muddled. Nobody knows what “AI” really is. If two people work on the same “AI” problem, their starting set of assumptions is very different.

In many cases, we use neural networks. Nobody really understands neural networks, which is the key factor: they “learn” by themselves, even if we don’t know exactly what they learn. So they come up with “solutions” without a lot of effort on the human side, which is great. It “just works”. Yet many such projects fail because the neural network tracks a spurious correlation – something that happens to us humans every day.
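
The spurious-correlation trap is easy to reproduce on purpose. In this invented example, a noise feature happens to track the label during training but not in deployment, and the model happily latches onto it:

```python
# Demonstrating a spurious correlation: the second feature is noise
# that accidentally matches the label in training, but not later.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)

signal = y + rng.normal(0, 2.0, n)          # weak but real signal
shortcut_train = y + rng.normal(0, 0.1, n)  # accidental, near-perfect correlation
shortcut_live = rng.normal(0, 1.0, n)       # the correlation is gone in the wild

model = LogisticRegression()
model.fit(np.column_stack([signal, shortcut_train]), y)

print("training accuracy:",
      model.score(np.column_stack([signal, shortcut_train]), y))  # near 1.0
print("deployed accuracy:",
      model.score(np.column_stack([signal, shortcut_live]), y))   # barely above chance
```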

Good

What is “good”? Is it good when you add a feature to the software? Even when you’re really uneasy about it? When it’s immoral? Illegal? If it means keeping your job?

Is the success of a project good? What is “success”? That it’s completed within time? Within budget? That it’s completed at all? What if the result is a rogue AI because too many corners were cut?

Unintentional Side Effects

The book “Avogadro Corp” tells the story of an AI which is created on purpose. Its creator fails to take into account that he’s not alone. Soon, the AI acquires resources which it was never meant to have. People are killed, wars are prevented. Is that “success”?

Many people believe that strong leaders are “good” even when all the evidence says otherwise. They translate an insecurity into a wishful fact. If the wish of these people – often the majority – is granted, is that “good?” Is it good to allow a person to reject medicine which would save them because of personal belief? When all evidence suggests that the belief is wrong? Is it good to force happiness on people?

We want AIs to have an impact on the real world – avoid collisions with other people and cars, select the best medicine, make people spend more money on things they “need”, detect “abnormal” behavior of individuals in groups, kill enemies efficiently. Some of those goals are only “good” for a very small group of people. To me, that sounds like the first AIs won’t be created to serve humanity. The incentive just isn’t there.

Conclusion

AIs are built by flawed humans; humans who can’t even agree on a term like “good”. I feel that a lot of people trust AIs and computers because they are based on “math” and math is always perfect, right? Well, no, it’s not. In addition, the perceived perfection of math is diluted by greed, stupidity, lack of sleep and all the other human factors.

To make things worse, AIs are created to solve problems beyond the capability of humans. We build them with technologies which we cannot understand. The goals for building AIs are driven by greed, fear, stupidity and hubris.

Looking back at history, my prediction is that the first AIs will probably be victims of the greatest human mental power: ignorance.


Technical Solutions to Amok Runs

3. August, 2016

Every now and then, an idiot realizes that his life isn’t exciting enough and decides to do something about it. Note: I apply humor to horror.

Some people (I think of them as idiots as well, just a different flavor) think that arming everyone is the best solution to this problem. Maybe these people never get angry.

Anyway. Here is my attempt at a solution: Data contracts.

A data contract is a contract which is attached to data.

Example: I could attach a contract to the data which my cell phone produces, for example, “code looking for the signature of gunshots can access data which the microphone produces.” Similarly, I could attach “code looking for symptoms of mass panic can access data from my mobile’s acceleration sensors.” And lastly, “code which detected mass panic or gunshots is allowed to access location data on my mobile.”

To build such a system, all data needs to be signed (so it can be attributed to someone) and it needs to contain the hash code of the contract. Big data services can then look up people by their signature (which would also allow creating a public/shared signature for an anonymous entity) and, from there, get the data contracts.
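
A minimal sketch of what a signed record with an attached contract hash could look like. The contract format and all field names are invented; only the hashing and signing mechanics are standard:

```python
# Sketch: sign a sensor reading and embed the hash of its data contract.
# Contract format and field names are invented for illustration.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

contract = {
    "allow": [
        {"purpose": "gunshot-signature-detection", "source": "microphone"},
        {"purpose": "mass-panic-detection", "source": "acceleration"},
        {"purpose": "incident-response", "source": "location",
         "condition": "gunshot-or-panic-detected"},
    ]
}
contract_hash = hashlib.sha256(
    json.dumps(contract, sort_keys=True).encode()).hexdigest()

reading = {"source": "microphone", "payload": "...", "contract": contract_hash}
message = json.dumps(reading, sort_keys=True).encode()

key = Ed25519PrivateKey.generate()
record = {"data": reading, "signature": key.sign(message).hex()}

# A big data service verifies the signature (attributing the data),
# looks up the contract by its hash and checks the "allow" rules.
key.public_key().verify(bytes.fromhex(record["signature"]), message)
print("signature valid; contract", contract_hash[:16], "...")
```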

Now, that in itself doesn’t protect against abuse of data by greedy or evil corporations. The solution here is the same as in the “real” world: auditing. People applying for access to this system need to undergo an audit where test data is fed into the system and auditors (which can be humans or bots or both) validate the operation. This results in a digital document signed by the auditors which then allows the applicants to access the data feeds.

This approach would protect my privacy from people who want my movement profile to annoy me with adverts, while safety services could still use the data to automatically detect disasters and dispatch help without me having to fumble for my phone while running for my life.

On the downside, attackers will start to shoot mobile phones.

If we look into the future, unstable people could be sentenced to share some of their data with automated systems which monitor their mental state – I’m positive that, as you read this, several companies are working on systems to determine the mental state of a person by looking at sensor data from their phones or fitness sensors. Of course, we’d need an improved justice system (our current one is too busy with things like patent lawsuits or copyright violations) with careful checks and balances to prevent another kind of idiot (the kind that doesn’t believe that everything has a cost) from running amok with this (e.g. putting “unwanted” people into virtual jails).

There is a certain amount of “bad things happening” that we have to accept as inevitable. Everyone who disagrees is invited to move to North Korea where they have … ah … “solved” this already.

For everyone else, this idea has a few holes. It needs computer-readable contracts, a way to negotiate contracts between computers (with and without human interaction), and technology that lets auditors feed test data into complex systems and see where it goes.

I think the computer-readable contracts will happen in the next few years; negotiating contracts and knowing what contracts you have is a big issue for companies. Their needs will drive this technology. Eventually, you’ll be able to set up a meeting with a lawyer who will configure a “contract matching app” on your mobile. When some service wants your data, the app will automatically approve the parts of the contract which you already agree with, and reject those which you’ll never accept. If the service still wants to do business with you, you’ll get a short list of points which are still undecided. A few swipes later, you’ll be in business or you’ll know why not.
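
The matching logic itself could be quite simple. A sketch with invented clause names and a hand-configured policy:

```python
# Sketch of the "contract matching app": compare a service's requested
# clauses against the user's pre-configured policy. Clause names and
# the policy sets are invented for illustration.
ALWAYS_ACCEPT = {"crash-reports-anonymous", "gunshot-detection-microphone"}
NEVER_ACCEPT = {"resell-location-data", "upload-address-book"}

def match_contract(requested: set[str]) -> dict[str, set[str]]:
    return {
        "approved": requested & ALWAYS_ACCEPT,
        "rejected": requested & NEVER_ACCEPT,
        "undecided": requested - ALWAYS_ACCEPT - NEVER_ACCEPT,
    }

result = match_contract({"crash-reports-anonymous",
                         "resell-location-data",
                         "ad-personalization"})
print(result)  # only the "undecided" clauses need a human swipe
```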

The test data problem can be solved by adding new features to the big data processing frameworks. Many of these already have ways to describe data processing graphs which the framework will then turn into actual data processing. For documentation purposes, you can already examine those graphs. Adding signature tracking (when you already have to process the signatures anyway to read the data) isn’t a big deal. Auditing then means checking those signature tracks.
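
A toy version of such signature tracking, with invented node names and log format:

```python
# Sketch: propagate data-owner signatures through a processing graph so
# auditors can later check whose data flowed into which processing step.
audit_log: list[tuple[str, frozenset[str]]] = []

def node(name: str, inputs: list[dict]) -> dict:
    signatures = frozenset(s for record in inputs for s in record["signatures"])
    audit_log.append((name, signatures))  # this is the "signature track"
    return {"signatures": signatures, "payload": "..."}

alice = {"signatures": frozenset({"sig:alice"}), "payload": "..."}
bob = {"signatures": frozenset({"sig:bob"}), "payload": "..."}

merged = node("panic-detector", [alice, bob])
node("dispatch-help", [merged])

for step, signatures in audit_log:  # what an auditor would inspect
    print(step, sorted(signatures))
```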

It’s not perfect but perfect doesn’t exist.


Paris

16. November, 2015

The foundation of civilization is the ability of the community to withstand its own death wishes and murderous instincts — André Glucksmann (source; my own translation)

There are people who will tell you that it’s a dog-eat-dog world. That’s a white lie. The building in which you sit while you read this is the result of cooperation of hundreds of thousands, maybe even millions of people. They dug the earth for ore and cement. They built trucks to transport them. They built factories to refine them and turn them into steel and tools. The process of smelting and forging steel has been developed by thousands of people over ten thousand years. Thousands of people all over the globe worked to build the device(s) which you use to read this.

Civilization is a result of cooperation by millions of people who have never met. Cooperation is the foundation on which we all stand. No bomb can change that – unless we allow ourselves to be manipulated by people that we despise.


How Much do You Have to Hide?

16. September, 2015

When confronted with surveillance, the usual reply is “nothing to hide.”

This answer is wrong. Let me tell you a story.

For over one hundred years, the city of Amsterdam had a census. They knew your gender, relationship status, number of children, parents, and where you lived. All this information was used to make life better for everyone. And it worked. People were happy. The city government was efficient. It could base decisions on statistics and data instead of gut feelings. They were the first ones to use computers to efficiently store and handle the data.

On May 10, 1940, the Nazis took the city. Suddenly, one bit of information – faith – decided over life and death. The Nazis took the data which had been collected and efficiently rounded up all the people they wanted to murder.

Surveillance is not about what you have to hide; it’s about how you can be hurt. It’s the question of how much of your life someone hiding in a faceless organization wants to ruin.


Balancing Security

3. October, 2014

For your IT security, you want it to be

  • Secure
  • Cheap
  • Comfortable

Now choose at most two.

As always in life, everything has a cost. There is no cheap way to be secure which is also comfortable. Home Depot chose “cheap” and “comfort” – you’ve seen the result. Mordac would prefer “secure” and “cheap”.

Those examples show why the answer probably is “secure” and “comfortable”. Which means we’re facing two problems: “cheap” is out of the question, and the two contradict each other. Secure passwords are long, hard to remember and contain lots of unusual characters (uncomfortable the first time you travel to a different country – yes, people there use different keyboard layouts). Turns out there is a “cheap” component hidden in “comfortable”.

Taking this on a social level, the price for security is freedom. To quote Benjamin Franklin: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” I don’t know about you, but I feel bad about terrorists dictating to us how much of our freedom we have to give up.

In a similar fashion, you can either punish criminals or prevent future crimes, but you have to choose one. We have learned through bad experiences (witch hunts, flaws of the US penal system) and good ones (like the Norwegian system) that punishment doesn’t always help, nor does it make victims happy. Which leaves us with the only conclusion: We, as a society, pay money to prevent future crimes because that’s the most reasonable thing to do.

Even if it leads to people mistakenly deriding modern penal systems as “holiday camps.”


Handicapped

3. May, 2014

Disabled people aren’t handicapped; they are being obstructed.


How We See Things

1. February, 2014

We don’t see things how they are, but how we are.

As Sheldon from Big Bang Theory said: “Text adventures run on the world’s most powerful graphics chip: Imagination!”

Everything you see or hear happens in your brain.

Think about it.

That insult that really hurt? Only in your brain.

Interesting, isn’t it?

Just beware of the “everything is my fault” concept. There is no point in trying to take responsibility for everything.


Justice with Michael Sandel

5. May, 2013

Justice, even more than money, is a key motivator for people. This is true for simple experiments, like the Ultimatum Game, and big topics, like the global financial crisis of 2007/08.

Michael Sandel teaches political philosophy at Harvard University and he routinely attracts thousands of listeners.

Sandel asks questions like “If you had to choose between (1) killing one person to save the lives of five others and (2) doing nothing, even though you knew that five people would die right before your eyes if you did nothing—what would you do?”

Or: “The tickets for my lectures are free, but you have to get them because so many people want to attend. Now some people have started paying money for someone to stand in line for them so they can attend for sure. Is that ethical?”

You can find videos of his lectures on the web site above. Here, I’ll collect a couple of important quotes from an interview he gave to Sternstunden (Swiss Radio and TV).

Most important point: Adding a financial incentive changes the meaning of a social practice. This is in contrast to the common belief that economics is neutral towards ethics.

Note: This is a loose translation of how I understood him, not what he actually said.

  • The world has become richer but the money is distributed unevenly. In recent years, the gap has widened, and this raises many difficult questions about justice.
  • The widening gap forces politicians to decide what a just world could be. It’s necessary to discuss these questions in public life.
  • Taxes are collected to benefit the common good and to alleviate inequalities. If some people move their money to low-tax states, they’re opting out of their civic responsibilities. This isn’t only unjust, it’s also problematic because it allows the most rich and influential members of a community to “outsource” some of their duties (while they still very much want to control said community).
  • Justice and democracy are connected. It’s unfair when many people work hard and invest a lot of effort but some of them get better pay. If this gap widens, it undermines the public spirit, the feeling that “we’re all in the same boat.” This feeling is one of the pillars of democracy. When the public spirit is undermined, democracy erodes.
  • Is it OK when a funds manager makes more money than a teacher? The market theory of laissez-faire says yes. But what if the results of a funds manager are pure luck? What if monkeys can beat them? Or a 64-year-old housewife?
  • Financial incentives were created to make people invest in the common good – this is the philosophical basis for the appeal of incentives. Does a funds manager who makes 1’000 times more money than a teacher also contribute 1’000 times more to the common good? If this can’t be proven, how can someone argue that the funds manager deserves to keep all his income?
  • Book: “What Money Can’t Buy” (Amazon.com). Questions from the description: “Should we pay children to read books or to get good grades? Should we allow corporations to pay for the right to pollute the atmosphere? Is it ethical to pay people to test risky new drugs or to donate their organs? What about hiring mercenaries to fight our wars? Auctioning admission to elite universities? Selling citizenship to immigrants willing to pay? […] How can we prevent market values from reaching into spheres of life where they don’t belong? What are the moral limits of markets? Without quite realizing it, Sandel argues, we have drifted from having a market economy to being a market society. Is this where we want to be? How can we protect the moral and civic goods that markets don’t honor and that money can’t buy?”
  • An important discussion that didn’t happen in the last decades is where markets benefit the common good and where they corrupt non-market values worth caring about.
  • Many countries don’t allow organs to be sold on the free market. Reasons: Poor people could be forced to sell their organs to the rich. It’s doubtful that a poor farmer from India would sell his organs voluntarily if that’s the only way to pay for the education of his children. Or how about making children just to butcher them for their organs? But there is a second reason: Do we want people to think of their bodies as a collection of spare parts that can be sold for a profit? Wouldn’t that degrade the human person? Also, there are always risks when donating organs: Something can go wrong during the operation (scars, infections, death), and if you donate a kidney, you only have one left, which creates a greater risk for you later.
  • Markets only work when they are free. We always need to make sure they aren’t driven by forces like extreme poverty, and to ask whether a market would debase or corrupt an important ethical value.
  • In Iraq and Afghanistan, more mercenaries from private companies served than US soldiers. There was no public debate whether we actually wanted this. Rousseau argued against this practice because it’s like outsourcing a civic obligation. This undermines national security, civic duties and democratic values.
  • The Public Theater in New York stages Shakespeare in the Park, free Shakespeare performances in Central Park. Rich people pay homeless people to stand in line for them, which perverts the intent of the event: to allow poor people to enjoy high-class culture. It puts a price tag on a free commodity. It also changes the audience and hence the public character of the event.
  • Something similar happens in Washington, D.C. Companies offer to stand in line for tickets for Congress hearings or important decisions of the Supreme Court. This means lobbyists can make sure they will have a seat in the law making process.
  • Both examples corrode democratic values; the latter one is just more obvious. But in both cases, commons, owned by all, are price-tagged by a few and forced into a market system that the majority doesn’t want and which benefits only a few – if at all.
  • Laax offered visitors of their skiing resort VIP passes which allowed owners to skip past waiting lines. Half of the people asked for their opinion didn’t like this; they said waiting in line was part of skiing. The other half found it OK. Notable: Laax offered only 10 such passes each day, and each was only CHF 30,- more than the standard pass. According to M. Sandel, this is a slightly different situation: Slopes aren’t public areas. You’re paying for access anyway.
  • Airports offer fast lanes for passengers who pay extra for their ticket. Part of the service is early boarding and more room for hand luggage. This is OK since the airline sells a service and amenity. But how about the right for a quicker security check? Boarding early is a commodity – in-flight safety isn’t.
  • These examples show how market values/practices (in contrast to moral values/practices) have become more important in the last 30 years.
  • Politics should have a discussion about the moral limits, the question where markets belong and where they don’t, where they displace, undermine or destroy moral or social values.
  • In the last 30 years, a pseudo-religion has grown around the holy market. The core belief is that markets can define what’s fair and right for the common good. M. Sandel thinks this is a mistake. There must be an important relation between market and morals: Markets are tools. They define neither justice nor the public good. They are useful to organize production processes and to distribute goods, and one can discuss what democratic goals they serve. But they are just instruments. Therefore, the use of markets must be controlled by moral values and legal considerations.
  • Markets are great to distribute goods like TVs, cars, etc. They are dangerous when applied in the context of family life, health, public life, raising children, education and national security.
  • Only by discussing these questions can we find out where markets are useful.
  • Theory of Justice by John Rawls, shared by Jürgen Habermas, based on Immanuel Kant: We can’t agree on what a good life is, what the virtues are and how we should value goods. Therefore, we have to find a way to decide matters of justice and goodness without being biased by our prejudices. It’s one reason why the law and the government should be neutral towards gender, religion, sexual orientation, etc.
  • One reason for this is to avoid endless discussions about what’s “good”. Pluralistic societies don’t want to force values on other people when they don’t share the same view.
  • Unfortunately, there is no way to define justice when members of society have contradicting views: 1. There is no way to make law while ignoring the underlying moral controversy. 2. Trying to do so creates hollow politics; it leads to public discussions without depth or goal. Technocratic discussions don’t inspire, and therefore people are frustrated. They feel that politics isn’t paying attention to the big questions. That’s why people should stand up for their moral and even spiritual beliefs and discuss them in public.
  • It’s impossible to separate justice from the public good.
  • Political parties try to avoid discussing moral issues because of the controversies.
  • We will have to come up with ways and places where we can discuss our moral views, values and justice: in civil society, social movements, the media and higher education.
  • Young people should be raised to be able to discuss complicated ethical questions.
  • There is always a danger that a group takes control of such discussions. But democracy is always a risk. There simply is no way to avoid the majority sometimes getting it wrong. To solve this, no decision can ever be fixed and frozen once and for all. It must always be possible to revert it later.
  • To teach philosophy, M. Sandel always invites students to discuss with him. This not only raises the students’ attention, it also allows him to include current topics in the discussion. That’s how political and moral philosophy always worked: by dialogue, discussion, and challenging assumptions.
  • Some philosophers write books that are technical, abstract and even obscure. While it’s important that they exist and tackle their topics, an equally important part of philosophy must care for the world and society. This is especially true for moral and political philosophies.
  • Sandel himself is sometimes confronted with the problem that tickets for his lectures are sold on the black market. He uses this as a topic to kick off a discussion with the students.
  • In a kindergarten, the caretakers found themselves constantly waiting for parents who picked up their children late. To improve the situation, they fined the tardy parents. But this backfired: Since the parents considered the fine a “service fee”, even more parents were late. Important issue here: Adding a financial incentive can change the meaning of a situation.
  • In the kindergarten example, the parents had felt guilty. When the financial incentive was introduced, the expectation was that demand would drop as the price rose. But parents suddenly felt like they were paying for an additional service.
  • Many economic experts believe that this doesn’t happen. The reason is that it’s true for material goods: A flat-screen TV behaves the same, no matter at which price it’s being sold. The price doesn’t change the product. But money can change the behavior of products which depend on certain attitudes and norms.
  • Example: The speech of the father of the bride. He can write the speech himself, download it from the Internet or pay a professional to write one for him. One could argue that a good speech makes the father sleep better and isn’t embarrassing for the bridal pair. If you’re president or prime minister, it’s not a big deal since everyone knows that these people don’t write their own speeches. But let’s assume the father gives a deeply moving, emotional, warm speech. Everyone is moved to tears. And later, people learn that he bought that speech online for $149. How would you feel if it was your father?
  • Example: A Spiderman cake for a birthday party. During the party, the mother confesses that she didn’t make it because the child didn’t like her design – it wasn’t “Spiderman” enough – so she bought one. Who is at fault? Would it have been good if the mother had taught her son to value the work that went into her cake? Or was it wise to give in, depending on the age of the child? In this case, the decision probably had no negative impact. But let’s assume this was a project of the mother and the other siblings to bake a cake for their brother. After much work, he doesn’t like it and asks to buy a “real” Spiderman cake. It’s easy to imagine that this could be negative for the family relations. The important question is which values, virtues and morals are involved and how buying a professional cake could corrupt them.
  • An example where people refuse the market is the municipality of Wolfenschiessen in the canton of Nidwalden in Switzerland. For 25 years, they have been debating whether they should allow a terminal storage for nuclear waste in their area. In 1993, a poll by Bruno Frey (PDF, German) showed that 50.8% were willing to accept such a dump. Offering a considerable financial compensation reduced acceptance to 24.6% (page 10). Without compensation, people felt it was their civic duty to take on this burden. But the money smelt like a bribe. They were willing to accept a risk for the public good, but they weren’t willing to sell the safety of their families and children.
  • It’s important to return economics to its roots. In the times of Adam Smith (18th century), the lecture was named “ethical and political economics”. The great economists were always thinking about how society can benefit best from economics (note: Karl Marx was a philosopher, not an economist). Before the 20th century, economics was always part of philosophy. Only in recent decades has it given itself a semblance of being stand-alone and neutral.
  • The most important things money can’t buy: Love, family, friends.
