There is a lot of argument about how AI will kill us all. Some argue that AI will see us as a threat and wipe us out, just in case (the Skynet faction). Others argue AI will pamper us to death. A third group hopes that AI will just get bored and leave.
But how about a benevolent AI? Imagine a life which is fulfilling and demanding, with highs and lows, disasters and triumphs. How about we create an AI with the goal of giving such a life to everyone?
Of course, everyone has a different idea of what such a life would look like. That makes the effort more complicated, but not impossible.
It would also be very expensive to give everyone their perfect life. The cost depends on the number of people (which will go down by itself) and on how close we want everyone to get to “perfect”. In the beginning, the AI will be both immature and low on resources. Over time, it will learn from its mistakes, and as the idea proves itself, people will start supporting it with more money and power. So this is a problem which will resolve itself over time.
Then there are people who are never happy with what they have, who can never get enough. Which, I think, translates to “the people around such persons don’t know how to teach them to be satisfied”. An AI could nudge those greedy people to become more content and enjoy their lives more.
This approach would also put an end to bullying. On one hand, people don’t like being bullied, so the AI would have to stop it. On the other hand, I’m pretty sure the bullies aren’t happy with their lives either, so making them more satisfied might already be enough to put an end to it.
Artificial Ethics
24. October, 2017

While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?
Around 26:00, Isaac talks about ways to develop AIs to control androids. He would prefer the safe approach of manually programming all the specific details to create an AI.
The “manual programming” path has been tried since the 1960s, and it is now deemed a failure: the task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.
Which means we have to solve a “simple” problem: encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (the quotes mean: we still have to figure out exactly what each of those terms means and how to put it into code).
Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: the output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not propose solutions to an ethical dilemma; it could only judge an existing one.
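To make this concrete, here is a minimal sketch of such a scorer in Python, assuming we already had a corpus of incident descriptions labeled by human raters. The data, the model choice (TF-IDF plus logistic regression from scikit-learn), and the function names are my placeholders, not a real system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: descriptions of incidents plus a human
# judgement (1 = ethical, 0 = unethical).
incidents = [
    "Company refunded the customer the amount it had overcharged.",
    "Manager read private employee mail without consent.",
]
labels = [1, 0]

# TF-IDF features plus logistic regression: the simplest model that
# maps a text to a probability between 0 and 1.
scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(incidents, labels)

def ethics_score(solution_text):
    """Return P(ethical): 0 = very unethical, 1 = very ethical."""
    return scorer.predict_proba([solution_text])[0][1]

print(ethics_score("Quietly read the customer's private messages."))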
It’s important that the output (the judgement) is checked and that the result (how good the judgement was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).
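A sketch of that feedback loop, again with placeholder data: scikit-learn’s HashingVectorizer and SGDClassifier allow incremental updates, so each human-checked judgement can be fed back one at a time (loss="log_loss" assumes scikit-learn 1.1 or newer):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# HashingVectorizer + SGDClassifier support incremental updates, so
# human-checked results can be fed back one at a time.
vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")  # logistic loss, probabilistic output

# Initial human-labeled batch (same toy data as above).
texts = [
    "Company refunded the customer the amount it had overcharged.",
    "Manager read private employee mail without consent.",
]
model.partial_fit(vectorizer.transform(texts), [1, 0], classes=[0, 1])

def feed_back(judged_text, corrected_label):
    """Tune the model with one human-checked judgement (1 = ethical, 0 = not).

    In a real system, the label itself would first be audited for the
    usual problems (racism, prejudice, etc.) before being used.
    """
    model.partial_fit(vectorizer.transform([judged_text]), [corrected_label])
```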
Based on that, another machine learning algorithm (MLA) could then try many different solutions, present them to the ethics one, and pick the best. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
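The interplay of the two algorithms might look like this; both the generator and the problem description are pure placeholders for whatever models would really be used:

```python
# Hypothetical interplay: a generator proposes candidate solutions, the
# ethics scorer ranks them, and humans audit the best ones.
def propose_solutions(problem, n=100):
    # Placeholder: a real system would call a trained generative model here.
    return [f"candidate solution {i} for {problem!r}" for i in range(n)]

def best_solutions(problem, ethics_score, top_k=5):
    candidates = propose_solutions(problem)
    # Rank every candidate by the judgement of the ethics algorithm.
    ranked = sorted(candidates, key=ethics_score, reverse=True)
    return ranked[:top_k]  # these would go to human supervisors first

# Usage, with the ethics_score function from the first sketch:
# print(best_solutions("allocate scarce hospital beds", ethics_score))
```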
That would eventually lead to ethical machines. Which would cause new problems: one day, there will be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.