Every once in a while, I run into people who deploy SNAPSHOT versions of Maven dependencies to a company-wide Nexus server with a job on a CI server. This is usually a very bad idea, especially when branches are involved.
Scenario: Two developers, John and Mary, are each working on project A in their own branch. They push their branches, CI builds them, and the resulting SNAPSHOTs end up on Nexus.
Problem: Nexus doesn’t know or care about branches. Whichever CI job finishes last wins.
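Why does the last job win? Both branches build the same Maven coordinates, so both deploys write to the same place on Nexus. A minimal sketch of the identical part of A’s pom.xml (the group id is made up for illustration):

    <!-- pom.xml of project A, identical on John's and Mary's branches -->
    <groupId>com.example</groupId>     <!-- hypothetical group id -->
    <artifactId>A</artifactId>
    <version>x.y.z-SNAPSHOT</version>  <!-- same coordinates from both branches,
                                            so each deploy overwrites the other -->

Nexus files SNAPSHOTs by groupId, artifactId, and version only; the branch name never enters the picture.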
Often, this is not a problem. Now let’s add another project, B, which depends on A.
As long as B depends on a release of A, everything is fine.
Now, John needs to make some changes in A. So he updates the dependency in B to A-x.y.z-SNAPSHOT. Everything is still fine, since Mary still uses the latest release of A.
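In B’s pom.xml, that change might look like this (with the same made-up group id as above):

    <dependency>
      <groupId>com.example</groupId>     <!-- hypothetical group id -->
      <artifactId>A</artifactId>
      <version>x.y.z-SNAPSHOT</version>  <!-- B now tracks the latest deployed SNAPSHOT of A -->
    </dependency>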
Then Mary also creates a feature branch in her clone of A. That still doesn’t break anything for John, because Maven caches SNAPSHOTs for a day by default.
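That one-day cache comes from Maven’s updatePolicy for remote repositories, which defaults to daily for SNAPSHOTs. A sketch of where that setting lives (repository id and URL are made up); running Maven with -U / --update-snapshots overrides it and checks immediately:

    <repository>
      <id>company-nexus</id>               <!-- hypothetical repository -->
      <url>https://nexus.example.com/repository/snapshots/</url>
      <snapshots>
        <enabled>true</enabled>
        <updatePolicy>daily</updatePolicy> <!-- the default: look for newer
                                                SNAPSHOTs at most once a day -->
      </snapshots>
    </repository>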
The next morning, John makes a change to B and builds it.
This build might break if Mary’s CI job finished last!
The problem here, which can go unnoticed for years, is that Maven silently downloads Mary’s version of A onto John’s computer and uses it to compile. John sees the source code from his branch of A, but the binaries he builds against are something else entirely.
Eventually, one of them will make a small change which affects the other’s project. They will see a NoSuchMethodError at runtime, get strange compile errors while the source code (which isn’t affected by the change) looks perfectly fine, or watch unit tests break in odd ways.
That is the main reason why you shouldn’t deploy SNAPSHOTs from branches to a shared Maven repository: it creates a small chance of subtle bugs which take a long time to find, because your mental model (“I see the source, this is what I get”) will be wrong.
You can get away with publishing the master branch to Nexus (i.e. only a single branch with SNAPSHOTs will ever be published to Nexus).
Note: If your CI server shares local Maven repositories between projects, your builds can fail on the CI server for the same reason. Configure your CI server to use a per-project local repository and wipe it before each build to reliably avoid such issues.
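A common way to do this (assuming your CI server lets you pass system properties or a custom settings file to Maven) is either mvn -Dmaven.repo.local=<path inside the job’s workspace> or a job-specific settings.xml; the path below is made up:

    <!-- job-specific settings.xml; the localRepository path is hypothetical -->
    <settings>
      <localRepository>/var/ci/jobs/project-B/.m2/repository</localRepository>
    </settings>

Wiping that directory before the build then guarantees that every SNAPSHOT is freshly resolved from Nexus.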
Artificial Ethics
24. October, 2017

While watching this video, I wondered: We’re using machine learning to earn money on the stock market and to make computers understand speech. Why not ethics?
Around 26:00, Isaac talks about ways to develop AIs to control androids. He would like to use the safe approach of manually programming all the specific details to create an AI.
The “manual programming” path has been tried since the 1960s, and it’s now deemed a failure. The task is simply too complex. It’s like manually writing down all possible chess positions: even if you tried, you’d run out of time. Machine learning is the way to go.
Which means we have to solve a “simple” problem: Encode the rules of ethics. That is, a machine learning algorithm must check itself (or be checked by another algorithm) against a basic set of ethical rules to determine whether “a solution” to “a problem” is “ethical” (quotes mean: “We still have to figure out exactly what that means and how to put it into code”).
Just like intelligence, ethics is somewhat of a soft and moving target. Fortunately, we have a huge body of texts (religious texts, laws, philosophy, polls) which a machine learning algorithm could be trained on. To test this machine, we could present it with artificial and real-life incidents and see how it rules. Note: The output of this algorithm would be a number between 0 (very unethical) and 1 (very ethical). It would not spit out solutions for how to solve an ethical dilemma; it could only judge an existing solution.
It’s important that the output (the judgement) is checked and the result (how good the output was) is fed back into the algorithm so it can tune itself. Both output and feedback need to be checked for the usual problems (racism, prejudice, etc.).
Based on that, another machine learning algorithm (MLA) could then try many different solutions, present those to the ethics one, and pick the best ones. At the beginning, humans would supervise this process as well (feedback as above). Eventually, the MLA would figure out the hidden rules of good solutions.
That would lead to ethical machines, which would cause new problems: eventually, there will be a machine, very impartial, that “is” more ethical than almost all humans. Whatever “is” might mean, then.