Do not implicitly negate the step in the TrustRegionMinimizer.

In the TrustRegionMinimizer, the step is currently implicitly negated.
This is done so that the linearized cost is 1/2 |r - J*step|^2, which
corresponds to solving J*step = r, so neither J nor r has to be modified.
However, this leads to the rather unintuitive situation that the
strategy returns a step in the positive gradient direction, which one
would expect to increase the function value. One way to address this
would be to rename the "step" parameter in the strategy to
"negative_step" and document it. This patch instead moves the negation
inside the strategy, just around the linear solver call, so that it
happens in a local context and is easier to document.

Change-Id: Idb258149a01f61c64e22128ea221c5a30cd89c89
diff --git a/internal/ceres/trust_region_minimizer.cc b/internal/ceres/trust_region_minimizer.cc
index d08255f..476cc9d 100644
--- a/internal/ceres/trust_region_minimizer.cc
+++ b/internal/ceres/trust_region_minimizer.cc
@@ -271,8 +271,8 @@
 
     double new_model_cost = 0.0;
     if (strategy_summary.termination_type != FAILURE) {
-      // new_model_cost = 1/2 |J * step - f|^2
-      model_residuals = -residuals;
+      // new_model_cost = 1/2 |f + J * step|^2
+      model_residuals = residuals;
       jacobian->RightMultiply(trust_region_step.data(), model_residuals.data());
       new_model_cost = model_residuals.squaredNorm() / 2.0;
 
@@ -328,7 +328,7 @@
       const double model_cost_change = max(kEpsilon, cost - new_model_cost);
 
       // Undo the Jacobian column scaling.
-      delta =  -(trust_region_step.array() * scale.array()).matrix();
+      delta = (trust_region_step.array() * scale.array()).matrix();
       iteration_summary.step_norm = delta.norm();
 
       // Convergence based on parameter_tolerance.