Various corrections and enhancements to the documentation.
Change-Id: I03519bfccf4367b36d36006f1450d5fbcbbf8621
diff --git a/docs/source/building.rst b/docs/source/building.rst
index 8229cc4..17e17d4 100644
--- a/docs/source/building.rst
+++ b/docs/source/building.rst
@@ -60,7 +60,7 @@
8. `protobuf <http://code.google.com/p/protobuf/>`_ is used for
serializing and deserializing linear least squares problems to
disk. This is useful for debugging and testing. It is an optional
-depdendency and without it some of the tests will be disabled.
+dependency and without it some of the tests will be disabled.
.. _section-linux:
@@ -215,7 +215,7 @@
On Windows, we support building with Visual Studio 2010 or newer. Note
that the Windows port is less featureful and less tested than the
-Linux or Mac OS X versions due to the unavaliability of SuiteSparse
+Linux or Mac OS X versions due to the unavailability of SuiteSparse
and ``CXSparse``. Building is also more involved since there is no
automated way to install the dependencies.
diff --git a/docs/source/introduction.rst b/docs/source/introduction.rst
index 835a064..e6f65e5 100644
--- a/docs/source/introduction.rst
+++ b/docs/source/introduction.rst
@@ -37,11 +37,10 @@
c. Specialized solvers for bundle adjustment problems in computer
vision.
- d. Iterative linear solvers with perconditioners for general sparse
+ d. Iterative linear solvers with preconditioners for general sparse
and bundle adjustment problems.
-#. Portable: Runs on Linux, Windows, Mac OS X and Android. An iOS port is
- underway.
+#. Portable: Runs on Linux, Windows, Mac OS X and Android.
At Google, Ceres Solver has been used for solving a variety of
diff --git a/docs/source/modeling.rst b/docs/source/modeling.rst
index f92c4c9..93098f1 100644
--- a/docs/source/modeling.rst
+++ b/docs/source/modeling.rst
@@ -4,9 +4,9 @@
.. _`chapter-modeling`:
-============
-Modeling API
-============
+========
+Modeling
+========
Recall that Ceres solves robustified non-linear least squares problems
of the form
@@ -29,18 +29,21 @@
non-linear least squares problems.
In this chapter we will describe the various classes that are part of
-Ceres Solver's modeling API, and how they can be used to construct
-optimization.
-
-Once a problem has been constructed, various methods for solving them
-will be discussed in :ref:`chapter-solving`. It is by design that the
-modeling and the solving APIs are orthogonal to each other. This
-enables easy switching/tweaking of various solver parameters without
-having to touch the problem once it has been successfuly modeling.
+Ceres Solver's modeling API, and how they can be used to construct an
+optimization problem. Once a problem has been constructed, various
+methods for solving them will be discussed in
+:ref:`chapter-solving`. It is by design that the modeling and the
+solving APIs are orthogonal to each other. This enables
+switching/tweaking of various solver parameters without having to
+touch the problem once it has been successfully modeled.
:class:`CostFunction`
---------------------
+The single biggest task when modeling a problem is specifying the
+residuals and their derivatives. This is done using
+:class:`CostFunction` objects.
+
.. class:: CostFunction
.. code-block:: c++
@@ -66,7 +69,7 @@
.. math:: J_{ij} = \frac{\partial}{\partial x_{i_j}}f_i\left(x_{i_1},...,x_{i_k}\right),\quad \forall j \in \{i_1,..., i_k\}
- The signature of the class:`CostFunction` (number and sizes of
+ The signature of the :class:`CostFunction` (number and sizes of
input parameter blocks and number of outputs) is stored in
:member:`CostFunction::parameter_block_sizes_` and
:member:`CostFunction::num_residuals_` respectively. User code
@@ -77,18 +80,16 @@
.. function:: bool CostFunction::Evaluate(double const* const* parameters, double* residuals, double** jacobians)
- This is the key methods. It implements the residual and Jacobian
- computation.
+ Compute the residual vector and the Jacobian matrices.
``parameters`` is an array of pointers to arrays containing the
- various parameter blocks. parameters has the same number of
- elements as :member:`CostFunction::parameter_block_sizes_`.
- Parameter blocks are in the same order as
+ various parameter blocks. ``parameters`` has the same number of
+ elements as :member:`CostFunction::parameter_block_sizes_` and the
+ parameter blocks are in the same order as
:member:`CostFunction::parameter_block_sizes_`.
``residuals`` is an array of size ``num_residuals_``.
-
``jacobians`` is an array of size
:member:`CostFunction::parameter_block_sizes_` containing pointers
to storage for Jacobian matrices corresponding to each parameter
@@ -105,7 +106,7 @@
this is the case when computing cost only. If ``jacobians[i]`` is
``NULL``, then the Jacobian matrix corresponding to the
:math:`i^{\textrm{th}}` parameter block must not be returned, this
- is the case when the a parameter block is marked constant.
+ is the case when a parameter block is marked constant.
**NOTE** The return value indicates whether the computation of the
residuals and/or jacobians was successful or not.
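+
+   For example, here is a sketch of a minimal analytic cost function
+   for :math:`f(x) = 10 - x` (it uses the :class:`SizedCostFunction`
+   described below to declare its sizes); note how the ``NULL``
+   checks implement the contract described above:
+
+   .. code-block:: c++
+
+     class QuadraticCostFunction : public SizedCostFunction<1, 1> {
+      public:
+       virtual ~QuadraticCostFunction() {}
+       virtual bool Evaluate(double const* const* parameters,
+                             double* residuals,
+                             double** jacobians) const {
+         const double x = parameters[0][0];
+         residuals[0] = 10 - x;
+         // jacobians may be NULL when the caller only wants the
+         // residuals, and jacobians[0] may be NULL when parameter
+         // block 0 is constant.
+         if (jacobians != NULL && jacobians[0] != NULL) {
+           jacobians[0][0] = -1;
+         }
+         return true;
+       }
+     };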
@@ -132,10 +133,10 @@
.. class:: SizedCostFunction
If the size of the parameter blocks and the size of the residual
- vector is known at compile time (this is the common case), Ceres
- provides :class:`SizedCostFunction`, where these values can be
- specified as template parameters. In this case the user only needs
- to implement the :func:`CostFunction::Evaluate`.
+ vector are known at compile time (this is the common case),
+ :class:`SizedCostFunction` can be used, where these values are
+ specified as template parameters, and the user only needs to
+ implement :func:`CostFunction::Evaluate`.
.. code-block:: c++
@@ -155,9 +156,28 @@
.. class:: AutoDiffCostFunction
- But even defining the :class:`SizedCostFunction` can be a tedious
- affair if complicated derivative computations are involved. To this
- end Ceres provides automatic differentiation.
+ Defining a :class:`CostFunction` or a :class:`SizedCostFunction`
+ can be tedious and error prone, especially when computing
+ derivatives. To this end Ceres provides `automatic differentiation
+ <http://en.wikipedia.org/wiki/Automatic_differentiation>`_.
+
+ .. code-block:: c++
+
+ template <typename CostFunctor,
+ int M, // Number of residuals, or ceres::DYNAMIC.
+ int N0, // Number of parameters in block 0.
+ int N1 = 0, // Number of parameters in block 1.
+ int N2 = 0, // Number of parameters in block 2.
+ int N3 = 0, // Number of parameters in block 3.
+ int N4 = 0, // Number of parameters in block 4.
+ int N5 = 0, // Number of parameters in block 5.
+ int N6 = 0, // Number of parameters in block 6.
+ int N7 = 0, // Number of parameters in block 7.
+ int N8 = 0, // Number of parameters in block 8.
+ int N9 = 0> // Number of parameters in block 9.
+ class AutoDiffCostFunction : public
+ SizedCostFunction<M, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
+ };
To get an auto differentiated cost function, you must define a
class with a templated ``operator()`` (a functor) that computes the
@@ -234,14 +254,14 @@
computing a 1-dimensional output from two arguments, both
2-dimensional.
- The framework can currently accommodate cost functions of up to 6
- independent variables, and there is no limit on the dimensionality of
- each of them.
+ The framework can currently accommodate cost functions of up to 10
+ independent variables, and there is no limit on the dimensionality
+ of each of them.
**WARNING 1** Since the functor will get instantiated with
different types for ``T``, you must convert from other numeric
types to ``T`` before mixing computations with other variables
- oftype ``T``. In the example above, this is seen where instead of
+ of type ``T``. In the example above, this is seen where instead of
using ``k_`` directly, ``k_`` is wrapped with ``T(k_)``.
**WARNING 2** A common beginner's error when first using
@@ -253,12 +273,74 @@
as the last template argument.
+:class:`DynamicAutoDiffCostFunction`
+------------------------------------
+
+.. class:: DynamicAutoDiffCostFunction
+
+ :class:`AutoDiffCostFunction` requires that the number of parameter
+ blocks and their sizes be known at compile time. It also has an
+ upper limit of 10 parameter blocks. In a number of applications,
+ e.g., Bezier curve fitting or Neural Network training, this is not
+ enough.
+
+ .. code-block:: c++
+
+ template <typename CostFunctor, int Stride = 4>
+ class DynamicAutoDiffCostFunction : public CostFunction {
+ };
+
+ In such cases :class:`DynamicAutoDiffCostFunction` can be
+ used. Like :class:`AutoDiffCostFunction` the user must define a
+ templated functor, but the signature of the functor differs
+ slightly. The expected interface for the cost functors is:
+
+ .. code-block:: c++
+
+     struct MyCostFunctor {
+       template<typename T>
+       bool operator()(T const* const* parameters, T* residuals) const {
+         // Compute the residuals from the variable number of
+         // parameter blocks pointed to by parameters.
+         return true;
+       }
+     };
+
+ Since the sizing of the parameters is done at runtime, you must
+ also specify the sizes after creating the dynamic autodiff cost
+ function. For example:
+
+ .. code-block:: c++
+
+ DynamicAutoDiffCostFunction<MyCostFunctor, 4> cost_function(
+ new MyCostFunctor());
+ cost_function.AddParameterBlock(5);
+ cost_function.AddParameterBlock(10);
+ cost_function.SetNumResiduals(21);
+
+ Under the hood, the implementation evaluates the cost function
+ multiple times, computing a small set of the derivatives (four by
+ default, controlled by the ``Stride`` template parameter) with each
+ pass. There is a performance tradeoff with the size of the passes;
+ smaller sizes are more cache efficient but result in a larger number
+ of passes, and larger stride lengths can destroy cache-locality
+ while reducing the number of passes over the cost function. The
+ optimal value depends on the number and sizes of the various
+ parameter blocks.
+
+ As a rule of thumb, try using :class:`AutoDiffCostFunction` before
+ you use :class:`DynamicAutoDiffCostFunction`.
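+
+  For concreteness, here is a small hypothetical functor (the names
+  are ours, not part of the API) that fits the interface above. Its
+  single residual is the difference between the sum of two variable
+  sized parameter blocks and a measurement:
+
+  .. code-block:: c++
+
+    struct SumToMeasurementFunctor {
+      explicit SumToMeasurementFunctor(double measurement)
+          : measurement_(measurement) {}
+
+      template <typename T>
+      bool operator()(T const* const* parameters, T* residuals) const {
+        // The block sizes match the AddParameterBlock() calls below.
+        T sum = T(0.0);
+        for (int i = 0; i < 3; ++i) sum += parameters[0][i];
+        for (int i = 0; i < 2; ++i) sum += parameters[1][i];
+        residuals[0] = sum - T(measurement_);
+        return true;
+      }
+
+      double measurement_;
+    };
+
+    DynamicAutoDiffCostFunction<SumToMeasurementFunctor, 4>* cost_function =
+        new DynamicAutoDiffCostFunction<SumToMeasurementFunctor, 4>(
+            new SumToMeasurementFunctor(10.0));
+    cost_function->AddParameterBlock(3);
+    cost_function->AddParameterBlock(2);
+    cost_function->SetNumResiduals(1);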
+
:class:`NumericDiffCostFunction`
--------------------------------
.. class:: NumericDiffCostFunction
- .. code-block:: c++
+ In some cases, it's not possible to define a templated cost functor,
+ for example when the evaluation of the residual involves a call to a
+ library function that you do not have control over. In such a
+ situation, `numerical differentiation
+ <http://en.wikipedia.org/wiki/Numerical_differentiation>`_ can be
+ used.
+
+ .. code-block:: c++
template <typename CostFunctionNoJacobian,
NumericDiffMethod method = CENTRAL, int M = 0,
@@ -268,12 +350,6 @@
: public SizedCostFunction<M, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
};
-
- Create a :class:`CostFunction` as needed by the least squares
- framework with jacobians computed via numeric (a.k.a. finite)
- differentiation. For more details see
- http://en.wikipedia.org/wiki/Numerical_differentiation.
-
To get a numerically differentiated :class:`CostFunction`, you
must define a class with an ``operator()`` (a functor) that computes
the residuals. The functor must write the computed value in the
@@ -335,14 +411,15 @@
CostFunction* cost_function
= new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(
- new MyScalarCostFunctor(1.0)); ^ ^ ^
- | | | |
- Finite Differencing Scheme -+ | | |
- Dimension of residual ----------+ | |
- Dimension of x --------------------+ |
- Dimension of y -----------------------+
+ new MyScalarCostFunctor(1.0)); ^ ^ ^ ^
+ | | | |
+ Finite Differencing Scheme -+ | | |
+ Dimension of residual ------------+ | |
+ Dimension of x ----------------------+ |
+ Dimension of y -------------------------+
- In this example, there is usually an instance for each measumerent of `k`.
+ In this example, there is usually an instance for each measurement
+ of `k`.
In the instantiation above, the template parameters following
``MyScalarCostFunctor``, ``1, 2, 2``, describe the functor as
@@ -398,161 +475,18 @@
sizes 4 and 8 respectively. Look at the tests for a more detailed
example.
-
-:class:`NormalPrior`
---------------------
-
-.. class:: NormalPrior
-
- .. code-block:: c++
-
- class NormalPrior: public CostFunction {
- public:
- // Check that the number of rows in the vector b are the same as the
- // number of columns in the matrix A, crash otherwise.
- NormalPrior(const Matrix& A, const Vector& b);
-
- virtual bool Evaluate(double const* const* parameters,
- double* residuals,
- double** jacobians) const;
- };
-
- Implements a cost function of the form
-
- .. math:: cost(x) = ||A(x - b)||^2
-
- where, the matrix A and the vector b are fixed and x is the
- variable. In case the user is interested in implementing a cost
- function of the form
-
- .. math:: cost(x) = (x - \mu)^T S^{-1} (x - \mu)
-
- where, :math:`\mu` is a vector and :math:`S` is a covariance matrix,
- then, :math:`A = S^{-1/2}`, i.e the matrix :math:`A` is the square
- root of the inverse of the covariance, also known as the stiffness
- matrix. There are however no restrictions on the shape of
- :math:`A`. It is free to be rectangular, which would be the case if
- the covariance matrix :math:`S` is rank deficient.
-
-
-:class:`ConditionedCostFunction`
---------------------------------
-
-.. class:: ConditionedCostFunction
-
- This class allows you to apply different conditioning to the residual
- values of a wrapped cost function. An example where this is useful is
- where you have an existing cost function that produces N values, but you
- want the total cost to be something other than just the sum of these
- squared values - maybe you want to apply a different scaling to some
- values, to change their contribution to the cost.
-
- Usage:
-
- .. code-block:: c++
-
- // my_cost_function produces N residuals
- CostFunction* my_cost_function = ...
- CHECK_EQ(N, my_cost_function->num_residuals());
- vector<CostFunction*> conditioners;
-
- // Make N 1x1 cost functions (1 parameter, 1 residual)
- CostFunction* f_1 = ...
- conditioners.push_back(f_1);
-
- CostFunction* f_N = ...
- conditioners.push_back(f_N);
- ConditionedCostFunction* ccf =
- new ConditionedCostFunction(my_cost_function, conditioners);
-
-
- Now ``ccf`` 's ``residual[i]`` (i=0..N-1) will be passed though the
- :math:`i^{\text{th}}` conditioner.
-
- .. code-block:: c++
-
- ccf_residual[i] = f_i(my_cost_function_residual[i])
-
- and the Jacobian will be affected appropriately.
-
-:class:`CostFunctionToFunctor`
-------------------------------
-
-.. class:: CostFunctionToFunctor
-
- :class:`CostFunctionToFunctor` is an adapter class that allows users to use
- :class:`CostFunction` objects in templated functors which are to be used for
- automatic differentiation. This allows the user to seamlessly mix
- analytic, numeric and automatic differentiation.
-
- For example, let us assume that
-
- .. code-block:: c++
-
- class IntrinsicProjection : public SizedCostFunction<2, 5, 3> {
- public:
- IntrinsicProjection(const double* observations);
- virtual bool Evaluate(double const* const* parameters,
- double* residuals,
- double** jacobians) const;
- };
-
- is a :class:`CostFunction` that implements the projection of a
- point in its local coordinate system onto its image plane and
- subtracts it from the observed point projection. It can compute its
- residual and either via analytic or numerical differentiation can
- compute its jacobians.
-
- Now we would like to compose the action of this
- :class:`CostFunction` with the action of camera extrinsics, i.e.,
- rotation and translation. Say we have a templated function
-
- .. code-block:: c++
-
- template<typename T>
- void RotateAndTranslatePoint(const T* rotation,
- const T* translation,
- const T* point,
- T* result);
-
-
- Then we can now do the following,
-
- .. code-block:: c++
-
- struct CameraProjection {
- CameraProjection(double* observation) {
- intrinsic_projection_.reset(
- new CostFunctionToFunctor<2, 5, 3>(new IntrinsicProjection(observation_)));
- }
- template <typename T>
- bool operator()(const T* rotation,
- const T* translation,
- const T* intrinsics,
- const T* point,
- T* residual) const {
- T transformed_point[3];
- RotateAndTranslatePoint(rotation, translation, point, transformed_point);
-
- // Note that we call intrinsic_projection_, just like it was
- // any other templated functor.
- return (*intrinsic_projection_)(intrinsics, transformed_point, residual);
- }
-
- private:
- scoped_ptr<CostFunctionToFunctor<2,5,3> > intrinsic_projection_;
- };
-
-
:class:`NumericDiffFunctor`
---------------------------
.. class:: NumericDiffFunctor
- A wrapper class that takes a variadic functor evaluating a
- function, numerically differentiates it and makes it available as a
- templated functor so that it can be easily used as part of Ceres'
- automatic differentiation framework.
+ Sometimes parts of a cost function can be differentiated
+ automatically or analytically but others require numeric
+ differentiation. :class:`NumericDiffFunctor` is a wrapper class
+ that takes a variadic functor evaluating a function, numerically
+ differentiates it and makes it available as a templated functor so
+ that it can be easily used as part of Ceres' automatic
+ differentiation framework.
For example, let us assume that
@@ -627,6 +561,158 @@
differentiated version of ``IntrinsicProjection``.
+:class:`CostFunctionToFunctor`
+------------------------------
+
+.. class:: CostFunctionToFunctor
+
+ Just like :class:`NumericDiffFunctor` allows numeric
+ differentiation to be mixed with automatic differentiation,
+ :class:`CostFunctionToFunctor` provides an even more general
+ mechanism. :class:`CostFunctionToFunctor` is an adapter class that
+ allows users to use :class:`CostFunction` objects in templated
+ functors which are to be used for automatic differentiation. This
+ allows the user to seamlessly mix analytic, numeric and automatic
+ differentiation.
+
+ For example, let us assume that
+
+ .. code-block:: c++
+
+ class IntrinsicProjection : public SizedCostFunction<2, 5, 3> {
+ public:
+ IntrinsicProjection(const double* observations);
+ virtual bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const;
+ };
+
+ is a :class:`CostFunction` that implements the projection of a
+ point in its local coordinate system onto its image plane and
+ subtracts it from the observed point projection. It can compute its
+ residual and, via either analytic or numerical differentiation,
+ its jacobians.
+
+ Now we would like to compose the action of this
+ :class:`CostFunction` with the action of camera extrinsics, i.e.,
+ rotation and translation. Say we have a templated function
+
+ .. code-block:: c++
+
+ template<typename T>
+ void RotateAndTranslatePoint(const T* rotation,
+ const T* translation,
+ const T* point,
+ T* result);
+
+
+ We can now do the following,
+
+ .. code-block:: c++
+
+ struct CameraProjection {
+ CameraProjection(double* observation) {
+ intrinsic_projection_.reset(
+         new CostFunctionToFunctor<2, 5, 3>(new IntrinsicProjection(observation)));
+ }
+ template <typename T>
+ bool operator()(const T* rotation,
+ const T* translation,
+ const T* intrinsics,
+ const T* point,
+ T* residual) const {
+ T transformed_point[3];
+ RotateAndTranslatePoint(rotation, translation, point, transformed_point);
+
+ // Note that we call intrinsic_projection_, just like it was
+ // any other templated functor.
+ return (*intrinsic_projection_)(intrinsics, transformed_point, residual);
+ }
+
+ private:
+ scoped_ptr<CostFunctionToFunctor<2,5,3> > intrinsic_projection_;
+ };
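+
+  ``CameraProjection`` can now be used with
+  :class:`AutoDiffCostFunction` like any other templated
+  functor. For example, assuming the rotation is parameterized as a
+  4 element quaternion (an assumption on our part, the example above
+  leaves the representation unspecified):
+
+  .. code-block:: c++
+
+    CostFunction* camera_projection =
+        new AutoDiffCostFunction<CameraProjection, 2, 4, 3, 5, 3>(
+            new CameraProjection(observation));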
+
+
+
+:class:`ConditionedCostFunction`
+--------------------------------
+
+.. class:: ConditionedCostFunction
+
+ This class allows you to apply different conditioning to the residual
+ values of a wrapped cost function. An example where this is useful is
+ where you have an existing cost function that produces N values, but you
+ want the total cost to be something other than just the sum of these
+ squared values - maybe you want to apply a different scaling to some
+ values, to change their contribution to the cost.
+
+ Usage:
+
+ .. code-block:: c++
+
+ // my_cost_function produces N residuals
+ CostFunction* my_cost_function = ...
+ CHECK_EQ(N, my_cost_function->num_residuals());
+ vector<CostFunction*> conditioners;
+
+ // Make N 1x1 cost functions (1 parameter, 1 residual)
+ CostFunction* f_1 = ...
+ conditioners.push_back(f_1);
+
+ CostFunction* f_N = ...
+ conditioners.push_back(f_N);
+ ConditionedCostFunction* ccf =
+ new ConditionedCostFunction(my_cost_function, conditioners);
+
+
+ Now ``ccf`` 's ``residual[i]`` (i=0..N-1) will be passed through the
+ :math:`i^{\text{th}}` conditioner.
+
+ .. code-block:: c++
+
+ ccf_residual[i] = f_i(my_cost_function_residual[i])
+
+ and the Jacobian will be affected appropriately.
+
+
+:class:`NormalPrior`
+--------------------
+
+.. class:: NormalPrior
+
+ .. code-block:: c++
+
+ class NormalPrior: public CostFunction {
+ public:
+      // Check that the number of rows in the vector b is the same as the
+ // number of columns in the matrix A, crash otherwise.
+ NormalPrior(const Matrix& A, const Vector& b);
+
+ virtual bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const;
+ };
+
+ Implements a cost function of the form
+
+ .. math:: cost(x) = ||A(x - b)||^2
+
+ where the matrix A and the vector b are fixed and x is the
+ variable. In case the user is interested in implementing a cost
+ function of the form
+
+ .. math:: cost(x) = (x - \mu)^T S^{-1} (x - \mu)
+
+ where :math:`\mu` is a vector and :math:`S` is a covariance matrix,
+ then :math:`A = S^{-1/2}`, i.e., the matrix :math:`A` is the square
+ root of the inverse of the covariance, also known as the stiffness
+ matrix. There are however no restrictions on the shape of
+ :math:`A`. It is free to be rectangular, which would be the case if
+ the covariance matrix :math:`S` is rank deficient.
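+
+  For example, a sketch of adding a 3 dimensional prior to a
+  hypothetical parameter block ``x`` (computing the stiffness matrix
+  from the covariance is left to the user):
+
+  .. code-block:: c++
+
+    Matrix A(3, 3);  // Stiffness matrix, e.g., A = S^{-1/2}.
+    Vector b(3);     // Prior mean.
+    // ... fill A and b ...
+    problem.AddResidualBlock(new NormalPrior(A, b), NULL, x);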
+
+
+
:class:`LossFunction`
---------------------
@@ -689,7 +775,6 @@
**Scaling**
-
Given one robustifier :math:`\rho(s)` one can change the length
scale at which robustification takes place, by adding a scale
factor :math:`a > 0` which gives us :math:`\rho(s,a) = a^2 \rho(s /
@@ -705,9 +790,9 @@
Instances
^^^^^^^^^
-Ceres includes a number of other loss functions. For simplicity we
-described their unscaled versions. The figure below illustrates their
-shape graphically. More details can be found in
+Ceres includes a number of predefined loss functions. For simplicity
+we described their unscaled versions. The figure below illustrates
+their shape graphically. More details can be found in
``include/ceres/loss_function.h``.
.. figure:: loss.png
@@ -743,10 +828,68 @@
.. class:: ComposedLoss
+ Given two loss functions ``f`` and ``g``, implements the loss
+ function ``h(s) = f(g(s))``.
+
+ .. code-block:: c++
+
+ class ComposedLoss : public LossFunction {
+ public:
+ explicit ComposedLoss(const LossFunction* f,
+ Ownership ownership_f,
+ const LossFunction* g,
+ Ownership ownership_g);
+ };
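+
+  For example, a sketch composing two of the predefined loss
+  functions, so that ``h(s) = HuberLoss(CauchyLoss(s))``:
+
+  .. code-block:: c++
+
+    LossFunction* loss_function = new ComposedLoss(
+        new HuberLoss(1.0), TAKE_OWNERSHIP,
+        new CauchyLoss(1.0), TAKE_OWNERSHIP);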
+
.. class:: ScaledLoss
+ Sometimes you want to simply scale the output value of the
+ robustifier. For example, you might want to weight different error
+ terms differently (e.g., weight pixel reprojection errors
+ differently from terrain errors).
+
+ Given a loss function :math:`\rho(s)` and a scalar :math:`a`, :class:`ScaledLoss`
+ implements the function :math:`a \rho(s)`.
+
+ Since we treat a ``NULL`` loss function as the Identity loss
+ function, :math:`\rho` = ``NULL`` is a valid input and will result
+ in the input being scaled by :math:`a`. This provides a simple way
+ of implementing a scaled ResidualBlock.
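+
+  For example, a sketch that down-weights a robustified residual
+  block by a factor of 0.5:
+
+  .. code-block:: c++
+
+    LossFunction* weighted_loss =
+        new ScaledLoss(new CauchyLoss(1.0), 0.5, TAKE_OWNERSHIP);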
+
.. class:: LossFunctionWrapper
+ Sometimes after the optimization problem has been constructed, we
+ wish to mutate the scale of the loss function. For example, when
+ performing estimation from data which has substantial outliers,
+ convergence can be improved by starting out with a large scale,
+ optimizing the problem and then reducing the scale. This can have
+ better convergence behavior than just using a loss function with a
+ small scale.
+
+ This templated class allows the user to implement a loss function
+ whose scale can be mutated after an optimization problem has been
+ constructed, e.g.,
+
+ .. code-block:: c++
+
+ Problem problem;
+
+ // Add parameter blocks
+
+ CostFunction* cost_function =
+ new AutoDiffCostFunction<UW_Camera_Mapper, 2, 9, 3>(
+ new UW_Camera_Mapper(feature_x, feature_y));
+
+    LossFunctionWrapper* loss_function =
+        new LossFunctionWrapper(new HuberLoss(1.0), TAKE_OWNERSHIP);
+ problem.AddResidualBlock(cost_function, loss_function, parameters);
+
+ Solver::Options options;
+ Solver::Summary summary;
+ Solve(options, &problem, &summary);
+
+ loss_function->Reset(new HuberLoss(1.0), TAKE_OWNERSHIP);
+ Solve(options, &problem, &summary);
+
Theory
^^^^^^
@@ -763,8 +906,8 @@
.. math::
- g(x) &= \rho'J^\top(x)f(x)\\
- H(x) &= J^\top(x)\left(\rho' + 2 \rho''f(x)f^\top(x)\right)J(x)
+ g(x) &= \rho'J^\top(x)f(x)\\
+ H(x) &= J^\top(x)\left(\rho' + 2 \rho''f(x)f^\top(x)\right)J(x)
where the terms involving the second derivatives of :math:`f(x)` have
been ignored. Note that :math:`H(x)` is indefinite if
@@ -783,9 +926,9 @@
.. math::
- \tilde{f}(x) &= \frac{\sqrt{\rho'}}{1 - \alpha} f(x)\\
- \tilde{J}(x) &= \sqrt{\rho'}\left(1 - \alpha
- \frac{f(x)f^\top(x)}{\left\|f(x)\right\|^2} \right)J(x)
+ \tilde{f}(x) &= \frac{\sqrt{\rho'}}{1 - \alpha} f(x)\\
+ \tilde{J}(x) &= \sqrt{\rho'}\left(1 - \alpha
+ \frac{f(x)f^\top(x)}{\left\|f(x)\right\|^2} \right)J(x)
In the case :math:`2 \rho''\left\|f(x)\right\|^2 + \rho' \lesssim 0`,
@@ -793,7 +936,7 @@
:math:`\epsilon`. For more details see [Triggs]_.
With this simple rescaling, one can use any Jacobian based non-linear
-least squares algorithm to robustifed non-linear least squares
+least squares algorithm to robustified non-linear least squares
problems.
@@ -917,6 +1060,80 @@
of :eq:`quaternion`.
+
+:class:`AutoDiffLocalParameterization`
+--------------------------------------
+
+.. class:: AutoDiffLocalParameterization
+
+ :class:`AutoDiffLocalParameterization` does for
+ :class:`LocalParameterization` what :class:`AutoDiffCostFunction`
+ does for :class:`CostFunction`. It allows the user to define a
+ templated functor that implements the
+ :func:`LocalParameterization::Plus` operation and it uses automatic
+ differentiation to implement the computation of the Jacobian.
+
+ To get an auto differentiated local parameterization, you must
+ define a class with a templated operator() (a functor) that computes
+
+ .. math:: x' = \boxplus(x, \Delta x),
+
+ For example, Quaternions have a three dimensional local
+ parameterization. Its plus operation can be implemented as (taken
+ from `internal/ceres/auto_diff_local_parameterization_test.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/include/ceres/local_parameterization.h>`_
+ )
+
+ .. code-block:: c++
+
+ struct QuaternionPlus {
+ template<typename T>
+ bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
+ const T squared_norm_delta =
+ delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
+
+ T q_delta[4];
+ if (squared_norm_delta > T(0.0)) {
+ T norm_delta = sqrt(squared_norm_delta);
+ const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
+ q_delta[0] = cos(norm_delta);
+ q_delta[1] = sin_delta_by_delta * delta[0];
+ q_delta[2] = sin_delta_by_delta * delta[1];
+ q_delta[3] = sin_delta_by_delta * delta[2];
+ } else {
+ // We do not just use q_delta = [1,0,0,0] here because that is a
+ // constant and when used for automatic differentiation will
+ // lead to a zero derivative. Instead we take a first order
+ // approximation and evaluate it at zero.
+ q_delta[0] = T(1.0);
+ q_delta[1] = delta[0];
+ q_delta[2] = delta[1];
+ q_delta[3] = delta[2];
+ }
+
+        QuaternionProduct(q_delta, x, x_plus_delta);
+ return true;
+ }
+ };
+
+ Then given this struct, the auto differentiated local
+ parameterization can now be constructed as
+
+ .. code-block:: c++
+
+ LocalParameterization* local_parameterization =
+ new AutoDiffLocalParameterization<QuaternionPlus, 4, 3>;
+ | |
+ Global Size ---------------+ |
+ Local Size -------------------+
+
+ **WARNING:** Since the functor will get instantiated with different
+ types for ``T``, you must convert from other numeric types to
+ ``T`` before mixing computations with other variables of type
+ ``T``. In the example above, this is seen where the constants
+ ``0.0`` and ``1.0`` are wrapped as ``T(0.0)`` and ``T(1.0)``.
+
+
:class:`Problem`
----------------
@@ -953,31 +1170,18 @@
of the term is just the squared norm of the residuals.
The user has the option of explicitly adding the parameter blocks
- using :func:`Problem::AddParameterBlock`. This causes additional correctness
- checking; however, :func:`Problem::AddResidualBlock` implicitly adds the
- parameter blocks if they are not present, so calling
- :func:`Problem::AddParameterBlock` explicitly is not required.
-
-
- :class:`Problem` by default takes ownership of the ``cost_function`` and
- ``loss_function`` pointers. These objects remain live for the life of
- the :class:`Problem` object. If the user wishes to keep control over the
- destruction of these objects, then they can do this by setting the
- corresponding enums in the ``Problem::Options`` struct.
-
-
- Note that even though the Problem takes ownership of ``cost_function``
- and ``loss_function``, it does not preclude the user from re-using
- them in another residual block. The destructor takes care to call
- delete on each ``cost_function`` or ``loss_function`` pointer only
- once, regardless of how many residual blocks refer to them.
+ using :func:`Problem::AddParameterBlock`. This causes additional
+ correctness checking; however, :func:`Problem::AddResidualBlock`
+ implicitly adds the parameter blocks if they are not present, so
+ calling :func:`Problem::AddParameterBlock` explicitly is not
+ required.
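+
+   For example, a sketch (``MyCostFunction`` is a stand-in for any
+   two parameter block :class:`CostFunction`):
+
+   .. code-block:: c++
+
+     double x[3];
+     double y[2];
+
+     Problem problem;
+     // Optional; enables additional correctness checking.
+     problem.AddParameterBlock(x, 3);
+     // Implicitly adds y as a parameter block of size 2.
+     problem.AddResidualBlock(new MyCostFunction(), NULL, x, y);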
:func:`Problem::AddParameterBlock` explicitly adds a parameter
block to the :class:`Problem`. Optionally it allows the user to
- associate a :class:`LocalParameterization` object with the parameter
- block too. Repeated calls with the same arguments are
+ associate a :class:`LocalParameterization` object with the
+ parameter block too. Repeated calls with the same arguments are
ignored. Repeated calls with the same double pointer but a
- different size results in undefined behaviour.
+ different size results in undefined behavior.
You can set any parameter block to be constant using
:func:`Problem::SetParameterBlockConstant` and undo this using
@@ -1003,11 +1207,11 @@
destruction of these objects, then they can do this by setting the
corresponding enums in the :class:`Problem::Options` struct.
- Even though :class:`Problem` takes ownership of these pointers, it
- does not preclude the user from re-using them in another residual
- or parameter block. The destructor takes care to call delete on
- each pointer only once.
-
+ Note that even though the Problem takes ownership of ``cost_function``
+ and ``loss_function``, it does not preclude the user from re-using
+ them in another residual block. The destructor takes care to call
+ delete on each ``cost_function`` or ``loss_function`` pointer only
+ once, regardless of how many residual blocks refer to them.
.. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, const vector<double*> parameter_blocks)
@@ -1056,14 +1260,14 @@
Add a parameter block with appropriate size to the problem.
Repeated calls with the same arguments are ignored. Repeated calls
with the same double pointer but a different size results in
- undefined behaviour.
+ undefined behavior.
.. function:: void Problem::AddParameterBlock(double* values, int size)
Add a parameter block with appropriate size and parameterization to
the problem. Repeated calls with the same arguments are
ignored. Repeated calls with the same double pointer but a
- different size results in undefined behaviour.
+ different size results in undefined behavior.
.. function:: void Problem::RemoveResidualBlock(ResidualBlockId residual_block)
@@ -1076,8 +1280,8 @@
the removal is fast (almost constant time). Otherwise, removing a
parameter block will incur a scan of the entire Problem object.
- WARNING: Removing a residual or parameter block will destroy the
- implicit ordering, rendering the jacobian or residuals returned
+ **WARNING:** Removing a residual or parameter block will destroy
+ the implicit ordering, rendering the jacobian or residuals returned
from the solver uninterpretable. If you depend on the evaluated
jacobian, do not use remove! This may change in a future release.
@@ -1088,10 +1292,10 @@
residual block will not get deleted immediately; won't happen until the
problem itself is deleted.
- WARNING: Removing a residual or parameter block will destroy the implicit
- ordering, rendering the jacobian or residuals returned from the solver
- uninterpretable. If you depend on the evaluated jacobian, do not use
- remove! This may change in a future release.
+ **WARNING:** Removing a residual or parameter block will destroy
+ the implicit ordering, rendering the jacobian or residuals returned
+ from the solver uninterpretable. If you depend on the evaluated
+ jacobian, do not use remove! This may change in a future release.
Hold the indicated parameter block constant during optimization.
@@ -1103,7 +1307,6 @@
Allow the indicated parameter to vary during optimization.
-
.. function:: void Problem::SetParameterization(double* values, LocalParameterization* local_parameterization)
Set the local parameterization for one of the parameter blocks.
@@ -1133,6 +1336,23 @@
The size of the residual vector obtained by summing over the sizes
of all of the residual blocks.
+.. function:: int Problem::ParameterBlockSize(const double* values) const
+
+ The size of the parameter block.
+
+.. function:: int Problem::ParameterBlockLocalSize(const double* values) const
+
+ The size of the local parameterization for the parameter block. If
+ there is no local parameterization associated with this parameter
+ block, then ``ParameterBlockLocalSize`` = ``ParameterBlockSize``.
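+
+  For example, a sketch with a quaternion parameter block:
+
+  .. code-block:: c++
+
+    double q[4];
+    problem.AddParameterBlock(q, 4, new QuaternionParameterization);
+    CHECK_EQ(problem.ParameterBlockSize(q), 4);
+    CHECK_EQ(problem.ParameterBlockLocalSize(q), 3);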
+
+
+.. function:: void Problem::GetParameterBlocks(vector<double*>* parameter_blocks) const
+
+ Fills the passed ``parameter_blocks`` vector with pointers to the
+ parameter blocks currently in the problem. After this call,
+ ``parameter_blocks->size() == NumParameterBlocks()``.
+
.. function:: bool Problem::Evaluate(const Problem::EvaluateOptions& options, double* cost, vector<double>* residuals, vector<double>* gradient, CRSMatrix* jacobian)
Evaluate a :class:`Problem`. Any of the output pointers can be
@@ -1148,7 +1368,6 @@
double cost = 0.0;
problem.Evaluate(Problem::EvaluateOptions(), &cost, NULL, NULL, NULL);
-
The cost is evaluated at `x = 1`. If you wish to evaluate the
problem at `x = 2`, then
diff --git a/docs/source/reading.rst b/docs/source/reading.rst
index 0612aa0..4e27567 100644
--- a/docs/source/reading.rst
+++ b/docs/source/reading.rst
@@ -6,5 +6,5 @@
the booklet by [Madsen]_ . For a general introduction to non-linear
optimization we recommend [NocedalWright]_. [Bjorck]_ remains the
seminal reference on least squares problems. [TrefethenBau]_ book is
-our favourite text on introductory numerical linear algebra. [Triggs]_
+our favorite text on introductory numerical linear algebra. [Triggs]_
provides a thorough coverage of the bundle adjustment problem.
diff --git a/docs/source/solving.rst b/docs/source/solving.rst
index 3fd4c87..a4564d6 100644
--- a/docs/source/solving.rst
+++ b/docs/source/solving.rst
@@ -5,9 +5,9 @@
.. _chapter-solving:
-==========
-Solver API
-==========
+=======
+Solving
+=======
Introduction
@@ -15,10 +15,8 @@
Effective use of Ceres requires some familiarity with the basic
components of a nonlinear least squares solver, so before we describe
-how to configure the solver, we will begin by taking a brief look at
-how some of the core optimization algorithms in Ceres work and the
-various linear solvers and preconditioners that power it.
-
+how to configure and use the solver, we will take a brief look at how
+some of the core optimization algorithms in Ceres work.
Let :math:`x \in \mathbb{R}^n` be an :math:`n`-dimensional vector of
variables, and
@@ -72,9 +70,8 @@
Trust region methods are in some sense dual to line search methods:
trust region methods first choose a step size (the size of the trust
region) and then a step direction while line search methods first
-choose a step direction and then a step size.
-
-Ceres implements multiple algorithms in both categories.
+choose a step direction and then a step size. Ceres implements
+multiple algorithms in both categories.
.. _section-trust-region-methods:
@@ -117,7 +114,7 @@
.. rubric:: Footnotes
-.. [#f1] At the level of the non-linear solver, the block and
+.. [#f1] At the level of the non-linear solver, the block
structure is not relevant, therefore our discussion here is
in terms of an optimization problem defined over a state
vector of size :math:`n`.
@@ -327,7 +324,7 @@
objective function.
This is because allowing for non-decreasing objective function values
-in a princpled manner allows the algorithm to *jump over boulders* as
+in a principled manner allows the algorithm to *jump over boulders* as
the method is not restricted to move into narrow valleys while
preserving its convergence properties.
@@ -506,7 +503,7 @@
.. math:: \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix}
\right]\left[ \begin{matrix} \Delta y \\ \Delta z
- \end{matrix} \right] = \left[ \begin{matrix} v\\ w
+ \end{matrix} \right] = \left[ \begin{matrix} v\\ w
\end{matrix} \right]\ ,
:label: linear2
@@ -898,7 +895,7 @@
Default: ``1e-3``
Lower threshold for relative decrease before a trust-region step is
- acceped.
+ accepted.
.. member:: double Solver::Options::lm_min_diagonal
@@ -1242,30 +1239,326 @@
:class:`IterationCallback`
--------------------------
+.. class:: IterationSummary
+
+ :class:`IterationSummary` describes the state of the optimizer
+ after each iteration of the minimization.
+
+ .. code-block:: c++
+
+ struct IterationSummary {
+ // Current iteration number.
+ int32 iteration;
+
+ // Step was numerically valid, i.e., all values are finite and the
+ // step reduces the value of the linearized model.
+ //
+ // Note: step_is_valid is false when iteration = 0.
+ bool step_is_valid;
+
+ // Step did not reduce the value of the objective function
+ // sufficiently, but it was accepted because of the relaxed
+ // acceptance criterion used by the non-monotonic trust region
+ // algorithm.
+ //
+      // Note: step_is_nonmonotonic is false when iteration = 0.
+ bool step_is_nonmonotonic;
+
+      // Whether or not the minimizer accepted this step. If the
+ // ordinary trust region algorithm is used, this means that the
+ // relative reduction in the objective function value was greater
+ // than Solver::Options::min_relative_decrease. However, if the
+ // non-monotonic trust region algorithm is used
+      // (Solver::Options::use_nonmonotonic_steps = true), then even if the
+ // relative decrease is not sufficient, the algorithm may accept the
+ // step and the step is declared successful.
+ //
+ // Note: step_is_successful is false when iteration = 0.
+ bool step_is_successful;
+
+ // Value of the objective function.
+ double cost;
+
+ // Change in the value of the objective function in this
+ // iteration. This can be positive or negative.
+ double cost_change;
+
+ // Infinity norm of the gradient vector.
+ double gradient_max_norm;
+
+ // 2-norm of the size of the step computed by the optimization
+ // algorithm.
+ double step_norm;
+
+ // For trust region algorithms, the ratio of the actual change in
+ // cost and the change in the cost of the linearized approximation.
+ double relative_decrease;
+
+ // Size of the trust region at the end of the current iteration. For
+ // the Levenberg-Marquardt algorithm, the regularization parameter
+ // mu = 1.0 / trust_region_radius.
+ double trust_region_radius;
+
+ // For the inexact step Levenberg-Marquardt algorithm, this is the
+ // relative accuracy with which the Newton(LM) step is solved. This
+ // number affects only the iterative solvers capable of solving
+ // linear systems inexactly. Factorization-based exact solvers
+ // ignore it.
+ double eta;
+
+      // Step size computed by the line search algorithm.
+ double step_size;
+
+ // Number of function evaluations used by the line search algorithm.
+ int line_search_function_evaluations;
+
+ // Number of iterations taken by the linear solver to solve for the
+ // Newton step.
+ int linear_solver_iterations;
+
+ // Time (in seconds) spent inside the minimizer loop in the current
+ // iteration.
+ double iteration_time_in_seconds;
+
+ // Time (in seconds) spent inside the trust region step solver.
+ double step_solver_time_in_seconds;
+
+ // Time (in seconds) since the user called Solve().
+ double cumulative_time_in_seconds;
+ };
+
.. class:: IterationCallback
- TBD
+ .. code-block:: c++
+
+ class IterationCallback {
+ public:
+ virtual ~IterationCallback() {}
+ virtual CallbackReturnType operator()(const IterationSummary& summary) = 0;
+ };
+
+ Interface for specifying callbacks that are executed at the end of
+ each iteration of the Minimizer. The solver uses the return value of
+ ``operator()`` to decide whether to continue solving or to
+ terminate. The user can return one of three values.
+
+ #. ``SOLVER_ABORT`` indicates that the callback detected an abnormal
+ situation. The solver returns without updating the parameter
+ blocks (unless ``Solver::Options::update_state_every_iteration`` is
+ set true). Solver returns with ``Solver::Summary::termination_type``
+ set to ``USER_ABORT``.
+
+ #. ``SOLVER_TERMINATE_SUCCESSFULLY`` indicates that there is no need
+ to optimize anymore (some user specified termination criterion
+ has been met). Solver returns with
+     ``Solver::Summary::termination_type`` set to ``USER_SUCCESS``.
+
+ #. ``SOLVER_CONTINUE`` indicates that the solver should continue
+ optimizing.
+
+ For example, the following ``IterationCallback`` is used internally
+ by Ceres to log the progress of the optimization.
+
+ .. code-block:: c++
+
+ class LoggingCallback : public IterationCallback {
+ public:
+ explicit LoggingCallback(bool log_to_stdout)
+ : log_to_stdout_(log_to_stdout) {}
+
+ ~LoggingCallback() {}
+
+ CallbackReturnType operator()(const IterationSummary& summary) {
+ const char* kReportRowFormat =
+ "% 4d: f:% 8e d:% 3.2e g:% 3.2e h:% 3.2e "
+ "rho:% 3.2e mu:% 3.2e eta:% 3.2e li:% 3d";
+ string output = StringPrintf(kReportRowFormat,
+ summary.iteration,
+ summary.cost,
+ summary.cost_change,
+ summary.gradient_max_norm,
+ summary.step_norm,
+ summary.relative_decrease,
+ summary.trust_region_radius,
+ summary.eta,
+ summary.linear_solver_iterations);
+ if (log_to_stdout_) {
+ cout << output << endl;
+ } else {
+ VLOG(1) << output;
+ }
+ return SOLVER_CONTINUE;
+ }
+
+ private:
+ const bool log_to_stdout_;
+ };
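+
+  A callback is registered with the solver by adding it to
+  ``Solver::Options::callbacks``, e.g.,
+
+  .. code-block:: c++
+
+    Solver::Options options;
+    // The solver does not take ownership of the callback.
+    LoggingCallback logging_callback(true);
+    options.callbacks.push_back(&logging_callback);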
+
+
:class:`CRSMatrix`
------------------
.. class:: CRSMatrix
- TBD
+ .. code-block:: c++
+
+ struct CRSMatrix {
+ int num_rows;
+ int num_cols;
+ vector<int> cols;
+ vector<int> rows;
+ vector<double> values;
+ };
+
+ A compressed row sparse matrix used primarily for communicating the
+ Jacobian matrix to the user.
+
+ A compressed row matrix stores its contents in three arrays,
+ ``rows``, ``cols`` and ``values``.
+
+ ``rows`` is a ``num_rows + 1`` sized array that points into the ``cols`` and
+ ``values`` arrays. For each row ``i``:
+
+ ``cols[rows[i]]`` ... ``cols[rows[i + 1] - 1]`` are the indices of the
+ non-zero columns of row ``i``.
+
+ ``values[rows[i]]`` ... ``values[rows[i + 1] - 1]`` are the values of the
+ corresponding entries.
+
+ ``cols`` and ``values`` contain as many entries as there are
+ non-zeros in the matrix.
+
+ e.g., consider the 3x4 sparse matrix
+
+ .. code-block:: c++
+
+ 0 10 0 4
+ 0 2 -3 2
+ 1 2 0 0
+
+ The three arrays will be:
+
+ .. code-block:: c++
+
+ -row0- ---row1--- -row2-
+ rows = [ 0, 2, 5, 7]
+ cols = [ 1, 3, 1, 2, 3, 0, 1]
+ values = [10, 4, 2, -3, 2, 1, 2]
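+
+  Given a populated ``CRSMatrix m``, a sketch of walking its non-zero
+  entries:
+
+  .. code-block:: c++
+
+    for (int row = 0; row < m.num_rows; ++row) {
+      for (int idx = m.rows[row]; idx < m.rows[row + 1]; ++idx) {
+        printf("m(%d, %d) = %f\n", row, m.cols[idx], m.values[idx]);
+      }
+    }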
+
:class:`Solver::Summary`
------------------------
.. class:: Solver::Summary
- TBD
+ .. code-block:: c++
+
+ struct Summary {
+ // A brief one line description of the state of the solver after
+ // termination.
+ string BriefReport() const;
+
+ // A full multiline description of the state of the solver after
+ // termination.
+ string FullReport() const;
+
+ // Minimizer summary -------------------------------------------------
+ MinimizerType minimizer_type;
+
+ SolverTerminationType termination_type;
+
+ // If the solver did not run, or there was a failure, a
+ // description of the error.
+ string error;
+
+ // Cost of the problem before and after the optimization. See
+ // problem.h for definition of the cost of a problem.
+ double initial_cost;
+ double final_cost;
+
+ // The part of the total cost that comes from residual blocks that
+ // were held fixed by the preprocessor because all the parameter
+ // blocks that they depend on were fixed.
+ double fixed_cost;
+
+ vector<IterationSummary> iterations;
+
+ int num_successful_steps;
+ int num_unsuccessful_steps;
+ int num_inner_iteration_steps;
+
+ // When the user calls Solve, before the actual optimization
+ // occurs, Ceres performs a number of preprocessing steps. These
+ // include error checks, memory allocations, and reorderings. This
+ // time is accounted for as preprocessing time.
+ double preprocessor_time_in_seconds;
+
+ // Time spent in the TrustRegionMinimizer.
+ double minimizer_time_in_seconds;
+
+ // After the Minimizer is finished, some time is spent in
+ // re-evaluating residuals etc. This time is accounted for in the
+ // postprocessor time.
+ double postprocessor_time_in_seconds;
+
+      // Sum total of all time spent inside Ceres when Solve is called.
+ double total_time_in_seconds;
+
+ double linear_solver_time_in_seconds;
+ double residual_evaluation_time_in_seconds;
+ double jacobian_evaluation_time_in_seconds;
+ double inner_iteration_time_in_seconds;
+
+ // Preprocessor summary.
+ int num_parameter_blocks;
+ int num_parameters;
+ int num_effective_parameters;
+ int num_residual_blocks;
+ int num_residuals;
+
+ int num_parameter_blocks_reduced;
+ int num_parameters_reduced;
+ int num_effective_parameters_reduced;
+ int num_residual_blocks_reduced;
+ int num_residuals_reduced;
+
+ int num_eliminate_blocks_given;
+ int num_eliminate_blocks_used;
+
+ int num_threads_given;
+ int num_threads_used;
+
+ int num_linear_solver_threads_given;
+ int num_linear_solver_threads_used;
+
+ LinearSolverType linear_solver_type_given;
+ LinearSolverType linear_solver_type_used;
+
+ vector<int> linear_solver_ordering_given;
+ vector<int> linear_solver_ordering_used;
+
+ bool inner_iterations_given;
+ bool inner_iterations_used;
+
+ vector<int> inner_iteration_ordering_given;
+ vector<int> inner_iteration_ordering_used;
+
+ PreconditionerType preconditioner_type;
+
+ TrustRegionStrategyType trust_region_strategy_type;
+ DoglegType dogleg_type;
+
+ SparseLinearAlgebraLibraryType sparse_linear_algebra_library;
+
+ LineSearchDirectionType line_search_direction_type;
+ LineSearchType line_search_type;
+ int max_lbfgs_rank;
+ };
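+
+  For example, a minimal sketch of inspecting the summary after a
+  solve:
+
+  .. code-block:: c++
+
+    Solver::Summary summary;
+    Solve(options, &problem, &summary);
+    cout << summary.FullReport() << endl;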
+
+
:class:`GradientChecker`
------------------------
.. class:: GradientChecker
-
-
-
-
-
diff --git a/docs/source/tutorial.rst b/docs/source/tutorial.rst
index 7e61386..bad5039 100644
--- a/docs/source/tutorial.rst
+++ b/docs/source/tutorial.rst
@@ -221,11 +221,9 @@
--------------------
In some cases, using automatic differentiation is not possible. For
-example, Ceres currently does not support automatic differentiation of
-functors with dynamically sized parameter blocks. Or it may be the
-case that it is more efficient to compute the derivatives in closed
-form instead of relying on the chain rule used by the automatic
-differentition code.
+example, it may be the case that it is more efficient to compute the
+derivatives in closed form instead of relying on the chain rule used
+by the automatic differentiation code.
In such cases, it is possible to supply your own residual and jacobian
computation code. To do this, define a subclass of
@@ -268,6 +266,20 @@
yourself, you use :class:`AutoDiffCostFunction` or
:class:`NumericDiffCostFunction` to construct your residual blocks.
+More About Derivatives
+----------------------
+
+Computing derivatives is by far the most complicated part of using
+Ceres, and depending on the circumstance the user may need more
+sophisticated ways of computing derivatives. This section just
+scratches the surface of how derivatives can be supplied to
+Ceres. Once you are comfortable with using
+:class:`NumericDiffCostFunction` and :class:`AutoDiffCostFunction` we
+recommend taking a look at :class:`DynamicAutoDiffCostFunction`,
+:class:`CostFunctionToFunctor`, :class:`NumericDiffFunctor` and
+:class:`ConditionedCostFunction` for more advanced ways of
+constructing and computing cost functions.
+
.. rubric:: Footnotes
.. [#f3] `examples/helloworld_numeric_diff.cc
@@ -286,7 +298,6 @@
of Powell's function. Let :math:`x = \left[x_1, x_2, x_3, x_4 \right]`
and
-
.. math::
\begin{align}
@@ -342,9 +353,7 @@
Note that each ``ResidualBlock`` only depends on the two parameters
that the corresponding residual object depends on and not on all four
-parameters.
-
-Compiling and running `examples/powell.cc
+parameters. Compiling and running `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_
gives us:
diff --git a/docs/source/version_history.rst b/docs/source/version_history.rst
index 9b9fd60..a83d098 100644
--- a/docs/source/version_history.rst
+++ b/docs/source/version_history.rst
@@ -4,6 +4,36 @@
Version History
===============
+HEAD
+====
+
+New Features
+------------
+
+#. Sparse and dense covariance estimation (EXPERIMENTAL).
+#. C API.
+#. Sped up the use of loss functions by > 17x.
+#. Use of inner iterations can now be adaptively stopped. Iteration
+   and runtime statistics for inner iterations are now reported in
+   ``Solver::Summary`` and ``Solver::Summary::FullReport``.
+#. Add BlockRandomAccessCRSMatrix.
+
+
+Bug Fixes
+---------
+
+#. Various corrections and cleanups in the documentation.
+#. Change the path where CeresConfig.cmake is installed (Pablo Speciale)
+#. Minor errors in documentation (Pablo Speciale)
+#. Updated depend.cmake to follow CMake IF convention. (Joydeep Biswas)
+#. Stabilize the Schur ordering algorithm.
+#. Update license header in split.h.
+#. Enable -O4 (link-time optimization) only if the compiler/linker supports it. (Alex Stewart)
+#. Consistent glog path across files.
+#. ceres-solver.spec: Use cleaner, more conventional Release string (Taylor Braun-Jones)
+#. Fix compile bug on RHEL6 due to missing header (Taylor Braun-Jones)
+
+
1.6.0
=====
diff --git a/include/ceres/autodiff_local_parameterization.h b/include/ceres/autodiff_local_parameterization.h
index 1099061..0aae6c7 100644
--- a/include/ceres/autodiff_local_parameterization.h
+++ b/include/ceres/autodiff_local_parameterization.h
@@ -58,7 +58,7 @@
//
// For example, Quaternions have a three dimensional local
// parameterization. Its plus operation can be implemented as (taken
-// from interncal/ceres/auto_diff_local_parameterization_test.cc)
+// from internal/ceres/auto_diff_local_parameterization_test.cc)
//
// struct QuaternionPlus {
// template<typename T>
diff --git a/include/ceres/loss_function.h b/include/ceres/loss_function.h
index 7b4af15..b99c184 100644
--- a/include/ceres/loss_function.h
+++ b/include/ceres/loss_function.h
@@ -347,19 +347,20 @@
//
// CostFunction* cost_function =
// new AutoDiffCostFunction < UW_Camera_Mapper, 2, 9, 3>(
-// new UW_Camera_Mapper(data->observations[2*i + 0],
-// data->observations[2*i + 1]));
+// new UW_Camera_Mapper(feature_x, feature_y));
//
// LossFunctionWrapper* loss_function(new HuberLoss(1.0), TAKE_OWNERSHIP);
//
// problem.AddResidualBlock(cost_function, loss_function, parameters);
//
// Solver::Options options;
-// scoped_ptr<Solver::Summary> summary1(Solve(problem, options));
+// Solver::Summary summary;
+//
+// Solve(options, &problem, &summary);
//
// loss_function->Reset(new HuberLoss(1.0), TAKE_OWNERSHIP);
//
-// scoped_ptr<Solver::Summary> summary2(Solve(problem, options));
+// Solve(options, &problem, &summary);
//
class LossFunctionWrapper : public LossFunction {
public:
diff --git a/include/ceres/numeric_diff_cost_function.h b/include/ceres/numeric_diff_cost_function.h
index 6c44b58..a47a66d 100644
--- a/include/ceres/numeric_diff_cost_function.h
+++ b/include/ceres/numeric_diff_cost_function.h
@@ -82,14 +82,14 @@
//
// CostFunction* cost_function
// = new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(
-// new MyScalarCostFunctor(1.0)); ^ ^ ^
-// | | | |
-// Finite Differencing Scheme -+ | | |
-// Dimension of residual ----------+ | |
-// Dimension of x --------------------+ |
-// Dimension of y -----------------------+
+// new MyScalarCostFunctor(1.0)); ^ ^ ^ ^
+// | | | |
+// Finite Differencing Scheme -+ | | |
+// Dimension of residual ------------+ | |
+// Dimension of x ----------------------+ |
+// Dimension of y -------------------------+
//
-// In this example, there is usually an instance for each measumerent of k.
+// In this example, there is usually an instance for each measurement of k.
//
// In the instantiation above, the template parameters following
// "MyScalarCostFunctor", "1, 2, 2", describe the functor as computing
@@ -126,7 +126,7 @@
// To get a numerically differentiated cost function, define a
// subclass of CostFunction such that the Evaluate() function ignores
// the jacobian parameter. The numeric differentiation wrapper will
-// fill in the jacobian parameter if nececssary by repeatedly calling
+// fill in the jacobian parameter if necessary by repeatedly calling
// the Evaluate() function with small changes to the appropriate
// parameters, and computing the slope. For performance, the numeric
// differentiation wrapper class is templated on the concrete cost