Lint changes from William Rucklidge.

Change-Id: I6592b61451ead8f0407bec134fcf4b56ba22ffb9
diff --git a/docs/source/gradient_solver.rst b/docs/source/gradient_solver.rst
index 1ef1515..ba7cbae 100644
--- a/docs/source/gradient_solver.rst
+++ b/docs/source/gradient_solver.rst
@@ -231,7 +231,7 @@
 In each iteration of the line search,

- .. math:: \text{new_step_size} >= \text{max_line_search_step_contraction} * \text{step_size}
+ .. math:: \text{new_step_size} \geq \text{max_line_search_step_contraction} * \text{step_size}

 Note that by definition, for contraction:
@@ -243,7 +243,7 @@
 In each iteration of the line search,

- .. math:: \text{new_step_size} <= \text{min_line_search_step_contraction} * \text{step_size}
+ .. math:: \text{new_step_size} \leq \text{min_line_search_step_contraction} * \text{step_size}

 Note that by definition, for contraction:
@@ -260,7 +260,7 @@
 As this is an 'artificial' constraint (one imposed by the user, not
 the underlying math), if ``WOLFE`` line search is being used, *and*
 points satisfying the Armijo sufficient (function) decrease
-condition have been found during the current search (in :math:`<=`
+condition have been found during the current search (in :math:`\leq`
 ``max_num_line_search_step_size_iterations``). Then, the step size
 with the lowest function value which satisfies the Armijo condition
 will be returned as the new valid step, even though it does *not*
@@ -289,7 +289,7 @@
 decreases sufficiently. Precisely, this second condition is that we
 seek a step size s.t.

-.. math:: \|f'(\text{step_size})\| <= \text{sufficient_curvature_decrease} * \|f'(0)\|
+.. math:: \|f'(\text{step_size})\| \leq \text{sufficient_curvature_decrease} * \|f'(0)\|

 Where :math:`f()` is the line search objective and :math:`f'()` is the
 derivative of :math:`f` with respect to the step size: :math:`\frac{d f}{d~\text{step size}}`.
@@ -304,7 +304,7 @@
 satisfying the conditions is found. Precisely, at each iteration of
 the expansion:

-.. math:: \text{new_step_size} <= \text{max_step_expansion} * \text{step_size}
+.. math:: \text{new_step_size} \leq \text{max_step_expansion} * \text{step_size}

 By definition for expansion
@@ -327,7 +327,7 @@
 Solver terminates if

-.. math:: \frac{|\Delta \text{cost}|}{\text{cost}} <= \text{function_tolerance}
+.. math:: \frac{|\Delta \text{cost}|}{\text{cost}} \leq \text{function_tolerance}

 where, :math:`\Delta \text{cost}` is the change in objective function
 value (up or down) in the current iteration of the line search.
@@ -338,7 +338,7 @@
 Solver terminates if

-.. math:: \|x - \Pi \boxplus(x, -g(x))\|_\infty <= \text{gradient_tolerance}
+.. math:: \|x - \Pi \boxplus(x, -g(x))\|_\infty \leq \text{gradient_tolerance}

 where :math:`\|\cdot\|_\infty` refers to the max norm, :math:`\Pi` is
 projection onto the bounds constraints and :math:`\boxplus` is
@@ -351,7 +351,7 @@
 Solver terminates if

-.. math:: \|\Delta x\| <= (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}
+.. math:: \|\Delta x\| \leq (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}

 where :math:`\Delta x` is the step computed by the linear solver in
 the current iteration of the line search.
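The contraction loop these docs describe (shrink the step until the Armijo sufficient-decrease condition holds, up to `max_num_line_search_step_size_iterations`) can be sketched as a small self-contained C++ function. This is an illustrative sketch only, not Ceres code: the function name, the test objective `f(s) = (s - 1)^2`, and the constants are all invented here.

```cpp
#include <cassert>
#include <cmath>

// Backtracking (Armijo) line search on the illustrative objective
// f(s) = (s - 1)^2, with f(0) = 1 and f'(0) = -2.
// Each failed iteration contracts the step multiplicatively,
// i.e. new_step_size = contraction * step_size, mirroring the
// {max,min}_line_search_step_contraction bounds in the docs.
double BacktrackingStepSize(double step_size,
                            double f0,                   // f(0)
                            double g0,                   // f'(0), must be < 0
                            double sufficient_decrease,  // Armijo constant
                            double contraction,          // in (0, 1)
                            int max_iterations) {
  auto f = [](double s) {
    const double d = s - 1.0;
    return d * d;
  };
  for (int i = 0; i < max_iterations; ++i) {
    // Armijo condition: f(step) <= f(0) + sufficient_decrease * step * f'(0).
    if (f(step_size) <= f0 + sufficient_decrease * step_size * g0) {
      return step_size;
    }
    step_size *= contraction;
  }
  // Iteration budget exhausted; a real solver would report failure here
  // (or, for WOLFE, fall back to the best Armijo point as described above).
  return step_size;
}
```

Starting from `step_size = 5` with a contraction factor of `0.5`, the steps 5 and 2.5 fail the Armijo test and 1.25 passes, so 1.25 is returned.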
diff --git a/include/ceres/jet.h b/include/ceres/jet.h
index 96b6dcf..d84dca3 100644
--- a/include/ceres/jet.h
+++ b/include/ceres/jet.h
@@ -576,11 +576,12 @@
 // We have various special cases, see the comment for pow(Jet, Jet) for
 // analysis:
 //
-// 1. For a > 0 we have: (a)^(p + dp) ~= a^p + a^p log(a) dp
+// 1. For f > 0 we have: (f)^(g + dg) ~= f^g + f^g log(f) dg
 //
-// 2. For a == 0 and p > 0 we have: (a)^(p + dp) ~= a^p
+// 2. For f == 0 and g > 0 we have: (f)^(g + dg) ~= f^g
 //
-// 3. For a < 0 and integer p we have: (a)^(p + dp) ~= a^p
+// 3. For f < 0 and integer g we have: (f)^(g + dg) ~= f^g but if dg
+//    != 0, the derivatives are not defined and we return NaN.

 template <typename T, int N>
 inline Jet<T, N> pow(double f, const Jet<T, N>& g) {
@@ -607,37 +608,37 @@
 // pow -- both base and exponent are differentiable functions. This has a
 // variety of special cases that require careful handling.
 //
-// 1. For a > 0: (a + da)^(b + db) ~= a^b + a^(b - 1) * (b*da + a*log(a)*db)
-//    The numerical evaluation of a*log(a) for a > 0 is well behaved, even for
+// 1. For f > 0: (f + df)^(g + dg) ~= f^g + f^(g - 1) * (g * df + f * log(f) * dg)
+//    The numerical evaluation of f * log(f) for f > 0 is well behaved, even for
 //    extremely small values (e.g. 1e-99).
 //
-// 2. For a == 0 and b > 1: (a + da)^(b + db) ~= 0
-//    This cases is needed because log(0) can not be evaluated in the a > 0
-//    expression. However the function a*log(a) is well behaved around a == 0
-//    and its limit as a-->0 is zero.
+// 2. For f == 0 and g > 1: (f + df)^(g + dg) ~= 0
+//    This case is needed because log(0) cannot be evaluated in the f > 0
+//    expression. However the function f*log(f) is well behaved around f == 0
+//    and its limit as f-->0 is zero.
 //
-// 3. For a == 0 and b == 1: (a + da)^(b + db) ~= 0 + da
+// 3. For f == 0 and g == 1: (f + df)^(g + dg) ~= 0 + df
 //
-// 4. For a == 0 and 0 < b < 1: The value is finite but the derivatives are not.
+// 4. For f == 0 and 0 < g < 1: The value is finite but the derivatives are not.
 //
-// 5. For a == 0 and b < 0: The value and derivatives of a^b are not finite.
+// 5. For f == 0 and g < 0: The value and derivatives of f^g are not finite.
 //
-// 6. For a == 0 and b == 0: The C standard incorrectly defines 0^0 to be 1
+// 6. For f == 0 and g == 0: The C standard incorrectly defines 0^0 to be 1
 //    "because there are applications that can exploit this definition". We
 //    (arbitrarily) decree that derivatives here will be nonfinite, since that
-//    is consistent with the behavior for a==0, b < 0 and 0 < b < 1. Practically
+//    is consistent with the behavior for f == 0, g < 0 and 0 < g < 1. Practically
 //    any definition could have been justified because mathematical consistency
 //    has been lost at this point.
 //
-// 7. For a < 0, b integer, db == 0: (a + da)^(b + db) ~= a^b + b * a^(b - 1) da
-//    This is equivalent to the case where a is a differentiable function and b
+// 7. For f < 0, g integer, dg == 0: (f + df)^(g + dg) ~= f^g + g * f^(g - 1) df
+//    This is equivalent to the case where f is a differentiable function and g
 //    is a constant (to first order).
 //
-// 8. For a < 0, b integer, db != 0: The value is finite but the derivatives are
-//    not, because any change in the value of b moves us away from the point
+// 8. For f < 0, g integer, dg != 0: The value is finite but the derivatives are
+//    not, because any change in the value of g moves us away from the point
 //    with a real-valued answer into the region with complex-valued answers.
 //
-// 9. For a < 0, b noninteger: The value and derivatives of a^b are not finite.
+// 9. For f < 0, g noninteger: The value and derivatives of f^g are not finite.

 template <typename T, int N>
 inline Jet<T, N> pow(const Jet<T, N>& f, const Jet<T, N>& g) {
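The rules renamed in the `pow(double f, const Jet<T, N>& g)` comment can be illustrated with a minimal one-dimensional dual number. This is a hedged sketch, not the `ceres::Jet` implementation: the `Dual` struct and `PowScalarDual` are names invented here, and only rules 1-3 of the first comment block are shown.

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Minimal one-dimensional dual number: value a, single derivative v.
struct Dual {
  double a;  // value
  double v;  // derivative
};

// Sketch of pow(scalar f, dual g) following the rules above:
//   1. f > 0:            f^(g + dg) ~= f^g + f^g * log(f) * dg
//   2. f == 0, g > 0:    value 0, derivative 0
//   3. f < 0:            value f^g (real only for integer g); any
//                        nonzero dg makes the derivative NaN.
Dual PowScalarDual(double f, const Dual& g) {
  if (f > 0.0) {
    const double value = std::pow(f, g.a);
    return {value, value * std::log(f) * g.v};
  }
  if (f == 0.0 && g.a > 0.0) {
    return {0.0, 0.0};
  }
  const double value = std::pow(f, g.a);
  const double deriv =
      (g.v == 0.0) ? 0.0 : std::numeric_limits<double>::quiet_NaN();
  return {value, deriv};
}
```

For example, `PowScalarDual(2.0, {3.0, 1.0})` gives value 8 and derivative `8 * log(2)` (rule 1), while `PowScalarDual(-2.0, {3.0, 1.0})` gives value -8 with a NaN derivative (rule 3).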