Restore the semantics of TrustRegionMinimizer

A previous change, made to fix a bug in how x_norm was being
initialized, also changed the semantics of the TrustRegionMinimizer
loop. With the bug present, parameter_tolerance was ignored until
the first successful step was encountered.
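
For reference, the parameter_tolerance test is the step-size
criterion documented for Solver::Options::parameter_tolerance,
|step| <= (|x| + parameter_tolerance) * parameter_tolerance. A
minimal sketch of that test (illustrative names, not the exact
function in trust_region_minimizer.cc):

  // Returns true if the candidate step is small relative to x, per
  // the documented parameter_tolerance criterion.
  bool SketchParameterToleranceReached(double step_norm,
                                       double x_norm,
                                       double parameter_tolerance) {
    const double step_size_tolerance =
        parameter_tolerance * (x_norm + parameter_tolerance);
    return step_norm <= step_size_tolerance;
  }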

parameter_tolerance-based convergence is a hack anyway, so
restoring the previous semantics is reasonable and preserves
existing code and tests that depend on this behaviour.
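
The guard below matters because the step norm can become small for
the wrong reason: every unsuccessful step shrinks the trust region
radius, so the candidate step can fall below the tolerance before
any progress has been made from the starting point. A toy,
hypothetical illustration of that failure mode (not Ceres code):

  #include <cstdio>

  int main() {
    const double parameter_tolerance = 1e-8;
    const double x_norm = 1.0;  // ||x|| at the start; x never moves
    double radius = 1.0;        // trust region radius
    for (int i = 0; i < 64; ++i) {
      const double step_norm = radius;  // candidate step is radius-limited
      if (step_norm <=
          parameter_tolerance * (x_norm + parameter_tolerance)) {
        // Without the "at least one successful step" guard, convergence
        // would be declared here even though x never moved.
        std::printf("spurious convergence after %d rejected steps\n", i);
        return 0;
      }
      radius /= 2.0;  // step rejected: shrink the trust region
    }
    return 0;
  }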

Change-Id: Ia00ca6c47f77e74dee64ad746f4299a9eab0eb7e
diff --git a/internal/ceres/trust_region_minimizer.cc b/internal/ceres/trust_region_minimizer.cc
index 60f6b62..800b676 100644
--- a/internal/ceres/trust_region_minimizer.cc
+++ b/internal/ceres/trust_region_minimizer.cc
@@ -78,6 +78,7 @@
           ? options_.max_consecutive_nonmonotonic_steps
           : 0);
 
+  bool at_least_one_successful_step = false;
   while (FinalizeIterationAndCheckIfMinimizerCanContinue()) {
     iteration_start_time_in_secs_ = WallTimeInSeconds();
 
@@ -105,7 +106,7 @@
     ComputeCandidatePointAndEvaluateCost();
     DoInnerIterationsIfNeeded();
 
-    if (ParameterToleranceReached()) {
+    if (at_least_one_successful_step && ParameterToleranceReached()) {
       return;
     }
 
@@ -114,6 +115,7 @@
     }
 
     if (IsStepSuccessful()) {
+      at_least_one_successful_step = true;
       RETURN_IF_ERROR_AND_LOG(HandleSuccessfulStep());
     } else {
       // Declare the step unsuccessful and inform the trust region strategy.