There is a growing group of people arguing that AIs will one day kill us, either by loving or hating us to death. I find their arguments interesting, but they miss an important factor: AI is created by (a few) humans.
That means AIs will inherit features from their creators:
- Humans make mistakes, so parts of the AI won’t do what they should.
- Each human defines “good” differently, and that definition changes over time.
- The road to hell is paved with good intentions.
My addition to the discussion is this: Even if we do everything “as right as possible”, the result will still be “surprising”.
Mistakes
Mistakes happen at all levels of software development. They can be made during the requirements phase, when the goals are set. Requirements are often vague, incomplete, missing, or outright wrong.
Software developers then make mistakes, too. They misunderstand the requirements, they struggle with the programming language, and their brains simply aren’t at the top of their abilities 100% of the time.
When it comes to AI, the picture gets even more muddled. Nobody knows what “AI” really is. If two people work on the same “AI” problem, they start from very different sets of assumptions.
In many cases, we use neural networks. Nobody really understands neural networks, and that is the key factor: they “learn” by themselves, even if we don’t know what exactly they learn. They come up with “solutions” without much effort on the human side, which is great. It “just works”. Yet many such projects have failed because the neural network tracked a spurious correlation – something that happens to us humans every day as well.
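As a minimal illustration of that failure mode – a toy sketch assuming scikit-learn, with invented data; the “tank” setup is a nod to the classic, possibly apocryphal tank-detector story – the model is trained on data where an irrelevant feature happens to match the label perfectly, and it learns that shortcut instead of the real signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# What we actually want the model to detect: is a tank in the picture?
tank = rng.integers(0, 2, n)

# A weak, noisy "real" feature plus an irrelevant feature (background
# brightness) that, by accident, matches the label perfectly in training.
real_signal = rng.normal(tank, 2.0)
brightness = tank.copy()

X_train = np.column_stack([real_signal, brightness])
model = LogisticRegression().fit(X_train, tank)

# At test time the accidental correlation breaks: brightness is random.
X_test = np.column_stack([rng.normal(tank, 2.0), rng.integers(0, 2, n)])
print("train accuracy:", model.score(X_train, tank))  # close to 1.0
print("test accuracy:", model.score(X_test, tank))    # barely above chance
```

Nothing in the training data warns the model (or us) that it has latched onto the wrong feature; the mistake only shows up once the world stops cooperating.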
Good
What is “good”? Is it good to add a feature to the software when you’re really uneasy about it? When it’s immoral? Illegal? When it means keeping your job?
Is the success of a project good? What is “success”? Completion on time? Within budget? That it got somewhat completed at all? What if the result is a rogue AI because too many corners were cut?
Unintentional Side Effects
The book “Avogadro Corp” tells the story of an AI that is created on purpose. Its creator fails to take into account that he’s not alone. Soon, the AI acquires resources it was never meant to have. People are killed, wars are prevented. Is that “success”?
Many people believe that strong leaders are “good” even when all the evidence says otherwise; they translate an insecurity into a wishful fact. If the wish of these people – often the majority – is granted, is that “good”? Is it good to allow a person to reject medicine that would save them because of a personal belief? When all evidence suggests that the belief is wrong? Is it good to force happiness on people?
We want AIs to have an impact on the real world – avoid collisions with other people and cars, select the best medicine, make people spend more money on things they “need”, detect “abnormal” behavior of individuals in groups, kill enemies efficiently. Some of those goals are only “good” for a very small group of people. For me, that sounds like the first AIs won’t be created to serve humanity. The incentive just isn’t there.
Conclusion
AIs are built by flawed humans; humans who can’t even agree on a term like “good”. I feel that a lot of people trust AIs and computers because they are based on “math” and math is always perfect, right? Well, no, it’s not. In addition, the perceived perfection of math is diluted by greed, stupidity, lack of sleep and all the other human factors.
To make things worse, AIs are created to solve problems beyond the capability of humans, and we build them with technologies we cannot fully understand. The goals behind building AIs are driven by greed, fear, stupidity, and hubris.
Looking back at history, my prediction is that the first AIs will probably fall victim to the greatest mental human power: ignorance.
Artificial Ethics
24. October, 2017

While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?
Around 26:00, Isaac talks about ways to develop AIs to control androids. He would prefer the safe approach of manually programming all the specific details to create an AI.
The “manual programming” path has been tried since the 1960s, and it’s now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.
Which means we have to solve a “simple” problem: encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (the quotes mean: “we still have to figure out exactly what that means and how to put it into code”).
Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) on which a machine learning algorithm could be trained. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: the output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions for resolving an ethical dilemma; it could only judge an existing solution.
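As a minimal sketch of what such a judge could look like – assuming scikit-learn and a hypothetical, human-labelled corpus of incident descriptions; all strings and labels below are invented for illustration:

```python
# A toy version of the proposed ethics judge: a text classifier that
# maps a description of a solution to a score between 0 and 1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: incident descriptions with human labels.
incidents = [
    "returned the lost wallet to its owner",
    "sold customer data without consent",
    "warned residents before the flood",
    "hid the safety defect to meet the deadline",
]
labels = [1, 0, 1, 0]  # 1 = judged ethical, 0 = judged unethical

judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
judge.fit(incidents, labels)

# The judge scores a proposed solution; it does not generate one.
score = judge.predict_proba(["publish the breach and notify the victims"])[0, 1]
print(f"ethics score: {score:.2f}")  # 0 = very unethical, 1 = very ethical
```

A real system would need vastly more data and a far richer model; the point is only the interface: text in, a single score out.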
It’s important that the output (the judgement) is checked and that the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).
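Continuing the toy sketch above, the feedback loop could be as simple as appending the reviewer’s corrected judgement to the training data and retraining (again, purely illustrative):

```python
# A human reviewer disagrees with the judge's score, so the corrected
# label is added to the (hypothetical) training data and the model is
# refit. A production system would also audit this data for racism,
# prejudice, and other biases before retraining.
incidents.append("publish the breach and notify the victims")
labels.append(1)              # the reviewer's corrected judgement
judge.fit(incidents, labels)  # retrain with the feedback included
```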
Based on that, another machine learning algorithm (MLA) could then try many different solutions, present those to the ethics one, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
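The resulting architecture could be sketched like this, with a hypothetical propose_solutions() standing in for the second MLA; nothing here reflects a real system:

```python
# The second algorithm proposes candidate solutions; the ethics judge
# from above scores them, and the best-scoring candidates are kept.
def propose_solutions(problem: str) -> list[str]:
    # Placeholder: a real system would use a generative model here.
    return [
        "delete the logs and deny everything",
        "disclose the incident and compensate the users",
        "blame a junior employee",
    ]

candidates = propose_solutions("data breach discovered")
ranked = sorted(candidates,
                key=lambda s: judge.predict_proba([s])[0, 1],
                reverse=True)
print("most ethical candidate:", ranked[0])
```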
That would eventually lead to ethical machines. Which would cause new problems: at some point, there will be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.