While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?
Around @26:00, Isaac talks about ways to develop AIs to control androids. He would like to use the safe approach of manually programming all the specific details to create an AI.
The “manual programming” path has been tried since the 1960s and is now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: Even if you tried, you’d run out of time. Machine learning is the way to go.
Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).
Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: The output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions on how to solve an ethical dilemma. It could only judge an existing solution.
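Concretely, such a judge could be an ordinary text classifier. Below is a minimal sketch, assuming scikit-learn and a tiny, made-up training set; a real system would be trained on the large corpus mentioned above.

    # A toy "ethics judge": a text classifier that maps a described solution to a
    # score between 0 (very unethical) and 1 (very ethical). The training data is
    # made up for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    training_texts = [
        "share the medicine equally among all patients",
        "hide the defect and sell the product anyway",
        "report the safety problem to the authorities",
        "blame the accident on the newest employee",
    ]
    labels = [1, 0, 1, 0]  # 1 = judged ethical, 0 = judged unethical

    judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
    judge.fit(training_texts, labels)

    # The judge does not propose solutions; it only scores an existing one.
    solution = "withhold the test results from the affected families"
    score = judge.predict_proba([solution])[0][1]  # probability of "ethical"
    print(f"ethics score: {score:.2f}")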
It’s important that the output (judgement) is checked and the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).
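One way that feedback loop could look, sketched with an incremental learner so each reviewed judgement can be folded back into the model as it arrives (again, library, names and data are assumptions, not a finished design):

    # Feed a human reviewer's corrected label back into the judge.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**16)
    judge = SGDClassifier(loss="log_loss")  # log loss keeps a 0..1 probability output

    seed_texts = ["treat all applicants by the same rules",
                  "sell the customer data without consent"]
    judge.partial_fit(vectorizer.transform(seed_texts), [1, 0], classes=[0, 1])

    def feedback(solution_text, human_label):
        # A reviewer checked the judgement; the corrected label tunes the model.
        # The reviewer's input must itself be screened for racism, prejudice, etc.
        judge.partial_fit(vectorizer.transform([solution_text]), [human_label])

    feedback("deny the loan because of the applicant's neighbourhood", 0)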
Based on that, another machine learning algorithm (MLA) could then try many different solutions, present those to the ethics one, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
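A rough sketch of that second loop. propose_solutions and ethics_score are trivial placeholders standing in for the solution-generating MLA and the trained judge; only the generate, score and pick structure is the point here.

    import random

    def propose_solutions(problem, n):
        # placeholder generator; a real MLA would produce genuinely new solutions
        return [f"{problem}: candidate strategy #{i}" for i in range(n)]

    def ethics_score(solution):
        # placeholder for the trained judge (0 = very unethical, 1 = very ethical)
        return random.random()

    def solve(problem, candidates=100, keep=3):
        scored = [(ethics_score(s), s) for s in propose_solutions(problem, candidates)]
        scored.sort(reverse=True)
        best = [s for _, s in scored[:keep]]
        # at the beginning, this short list goes to human supervisors, and their
        # verdicts are fed back into both algorithms (feedback as above)
        return best

    print(solve("allocate scarce donor organs"))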
That would eventually lead to ethical machines, which would cause new problems of their own: At some point there will be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean by then.
When Peter F. Hamilton wrote about the Edenists growing space stations out of asteroids by planting an artificial, genetically engineered egg on them, it was science fiction.
Carl de Smet found a way to make foam form into a chair when heated in an oven. The next step in the design is to make the surface of the chair re-mold itself at body temperature – the chair will deform to adjust to the shape of the person sitting on it.
Dave Thomas gave the keynote speech (link) about how technology seems to change all around us just to surface the same old problems over and over again.
Things that I took home:
In the future, queries will be more important than languages
How often were you irritated by how source code from someone else looked? I don’t mean sloppy, I mean indentation or how they place spaces and braces. In 2012, it should be possible to separate the model (source code) from the view (text editor) – why can’t my IDE simply show me the source in the way that I like and keep the source code in a nice, common format? (see Bug 45423 – [formatting] Separate presentation from formatting)
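A toy illustration of that model/view split (the canonical format and indent widths here are made up): the repository keeps one canonical form, and each developer's editor merely renders it in their personal style.

    # Model: canonical source in the repository uses tabs for indentation.
    # View: each developer sees it rendered with their own preferred indent.
    CANONICAL_INDENT = "\t"

    def render_for(developer_indent, canonical_source):
        # what the IDE shows this particular developer
        return canonical_source.replace(CANONICAL_INDENT, developer_indent)

    def normalize(displayed_source, developer_indent):
        # what gets written back to the repository on save
        return displayed_source.replace(developer_indent, CANONICAL_INDENT)

    canonical = "if (x) {\n\treturn y;\n}\n"
    print(render_for("    ", canonical))  # I see 4 spaces; you might see 2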
And how often have you wondered “Which parts of the code call Locale.getDefault() either directly or indirectly?”
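That question becomes a simple reachability query once the call graph is available as data. A small sketch with a made-up caller-to-callees map:

    # Who calls Locale.getDefault(), directly or indirectly?
    calls = {
        "ReportService.render": ["DateFormatter.format"],
        "DateFormatter.format": ["Locale.getDefault"],
        "InvoicePrinter.print": ["ReportService.render"],
        "MathUtil.round": [],
    }

    def callers_of(target):
        # all methods that reach `target` through any chain of calls
        reached, frontier = set(), {target}
        while frontier:
            direct = {caller for caller, callees in calls.items()
                      if frontier & set(callees) and caller not in reached}
            reached |= direct
            frontier = direct
        return reached

    print(callers_of("Locale.getDefault"))
    # {'DateFormatter.format', 'ReportService.render', 'InvoicePrinter.print'}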
How often did you need to make a large-scale change in the source code that a refactoring could have done in a few minutes – but writing that refactoring would have taken days because there simply are no tools in your IDE to quickly write even simple refactorings?
Imagine this: You can load the complete AST of your source code into a NoSQL database. And all the XML files. And the configuration files. And the UML. And everything else. And then create links between those parts. And query those links … apply a piece of JavaScript to each matching node …
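A minimal sketch of that idea, using Python's own ast module as a stand-in for “the complete AST in a queryable store” and a plain list instead of a NoSQL database:

    import ast

    # a tiny "database": every AST node of this source becomes one flat record
    source = "import locale\ndef greet(name):\n    return str(locale.getdefaultlocale())\n"

    tree = ast.parse(source)
    store = [{"type": type(node).__name__,
              "line": getattr(node, "lineno", None),
              "node": node}
             for node in ast.walk(tree)]

    # query: all function calls; then apply a snippet of code to each matching
    # node (the role JavaScript plays in the talk)
    for record in store:
        if record["type"] == "Call":
            print(f"call on line {record['line']}: {ast.unparse(record['node'].func)}")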
Customers of my applications always want new business reports every day. It takes way too long to build these reports. It takes too much effort to change them. And it’s impossible to know whether a report is correct because no one can write test cases for them.
Some concepts are easier to understand when you see a graph. With HTML5, a lot of data and a bit of time, you can build a system that can visualize things like “life span over number of children” or the sources of energy over time. Interesting that nuclear power, renewable energy and energy from crude oil are at about the same level today.
This view seems to indicate that the GDP of a country starts to take off as soon as each family has two or fewer children. But maybe it just shows what happens after financial constructs like derivatives and bonds allow enormous wealth to be accumulated in a short time. “Correlation does not imply causation.”