commit | b4803778c31789509e6c5431e59e559740807e15 | [log] [tgz]
---|---|---
author | Sameer Agarwal <sameeragarwal@google.com> | Thu May 26 04:28:22 2022 -0700
committer | Sameer Agarwal <sameeragarwal@google.com> | Tue Jun 07 14:08:26 2022 -0700
tree | 7b9741ee1c715b1716dfe290010be19d6dc3649e |
parent | 2e764df06f462f5432f63ce552eb301daa56cbbc [diff] |

Update documentation for linear_solver_ordering_type

Also update obsolete documentation related to building and using
sparse linear algebra libraries.

Change-Id: I83682b43472e6a6ec4e4dad32fa21c089d518c06

diff --git a/docs/source/features.rst b/docs/source/features.rst
index dacb327..d905415 100644
--- a/docs/source/features.rst
+++ b/docs/source/features.rst

@@ -47,10 +47,10 @@ linear system. To this end Ceres ships with a variety of linear solvers - dense QR and dense Cholesky factorization (using `Eigen`_, `LAPACK`_ or `CUDA`_) for dense problems, sparse - Cholesky factorization (`SuiteSparse`_, `Apple's Accelerate`_, - `CXSparse`_ `Eigen`_) for large sparse problems, custom Schur - complement based dense, sparse, and iterative linear solvers for - `bundle adjustment`_ problems. + Cholesky factorization (`SuiteSparse`_, `Accelerate`_, `Eigen`_) + for large sparse problems, custom Schur complement based dense, + sparse, and iterative linear solvers for `bundle adjustment`_ + problems. - **Line Search Solvers** - When the problem size is so large that storing and factoring the Jacobian is not feasible or a low @@ -90,7 +90,6 @@ .. _SuiteSparse: http://www.cise.ufl.edu/research/sparse/SuiteSparse/ .. _Eigen: http://eigen.tuxfamily.org/ .. _LAPACK: http://www.netlib.org/lapack/ -.. _CXSparse: https://www.cise.ufl.edu/research/sparse/CXSparse/ .. _automatic: http://en.wikipedia.org/wiki/Automatic_differentiation .. _numeric: http://en.wikipedia.org/wiki/Numerical_differentiation .. _CUDA : https://developer.nvidia.com/cuda-toolkit

diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 7cce067..5e6b0a7 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst

@@ -85,17 +85,6 @@ `CMake support for SuiteSparse <https://github.com/sergiud/SuiteSparse>`_ project. -- `CXSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_. - Similar to ``SuiteSparse`` but simpler and slower. CXSparse has - no dependencies on ``LAPACK`` and ``BLAS``. This makes for a simpler - build process and a smaller binary. **Optional** - - A CMake native version of CXSparse that can be compiled on a variety of - platforms (e.g., using Visual Studio, Xcode, MinGW, etc.) is also maintained - by the `CMake support for SuiteSparse - <https://github.com/sergiud/SuiteSparse>`_ project and is part of the - SuiteSparse package. - - `Apple's Accelerate sparse solvers <https://developer.apple.com/documentation/accelerate/sparse_solvers>`_. As of Xcode 9.0, Apple's Accelerate framework includes support for solving sparse linear systems across macOS, iOS et al. **Optional** @@ -162,7 +151,7 @@ sudo apt-get install libatlas-base-dev # Eigen3 sudo apt-get install libeigen3-dev - # SuiteSparse and CXSparse (optional) + # SuiteSparse (optional) sudo apt-get install libsuitesparse-dev We are now ready to build, test, and install Ceres. @@ -203,7 +192,7 @@ 5 1.803399e+04 5.33e+01 1.48e+04 1.23e+01 9.99e-01 8.33e+05 1 1.45e-01 1.08e+00 6 1.803390e+04 9.02e-02 6.35e+01 8.00e-01 1.00e+00 2.50e+06 1 1.50e-01 1.23e+00 - Solver Summary (v 2.1.0-eigen-(3.4.0)-lapack-suitesparse-(5.10.1)-cxsparse-(3.2.0)-acceleratesparse-eigensparse-no_openmp) + Solver Summary (v 2.1.0-eigen-(3.4.0)-lapack-suitesparse-(5.10.1)-acceleratesparse-eigensparse-no_openmp) Original Reduced Parameter blocks 22122 22122 @@ -291,7 +280,7 @@ brew install glog gflags # Eigen3 brew install eigen - # SuiteSparse and CXSparse + # SuiteSparse brew install suite-sparse We are now ready to build, test, and install Ceres. @@ -348,23 +337,13 @@ project. If you wish to use ``SuiteSparse``, follow their instructions for obtaining and building it. - #. 
(Experimental) ``CXSparse`` Previously CXSparse was not - available on Windows, there are now several ports that enable it - to be, including: `[1] <https://github.com/PetterS/CXSparse>`_ - and `[2] <https://github.com/TheFrenchLeaf/CXSparse>`_. If you - wish to use ``CXSparse``, follow their instructions for - obtaining and building it. - - #. Alternatively, Ceres Solver supports ``SuiteSparse`` binary - packages available for Visual Studio 2019 and 2022 provided by the `CMake - support for SuiteSparse <https://github.com/sergiud/SuiteSparse>`_ project - that also include `reference LAPACK <http://www.netlib.org/blas>`_ (and - BLAS). The binary packages are used by Ceres Solver for continuous testing - on Github. - - .. note:: - - The GPL licensed version of the SuiteSparse package is required. + Alternatively, Ceres Solver supports ``SuiteSparse`` binary + packages available for Visual Studio 2019 and 2022 provided by + the `CMake support for SuiteSparse + <https://github.com/sergiud/SuiteSparse>`_ project that also + include `reference LAPACK <http://www.netlib.org/blas>`_ (and + BLAS). The binary packages are used by Ceres Solver for + continuous testing on Github. #. Unpack the Ceres tarball into ``ceres``. For the tarball, you should get a directory inside ``ceres`` similar to @@ -427,8 +406,9 @@ optimal performance. #. CMake puts the resulting test binaries in ``ceres-bin/examples/Debug`` by default. - #. Without ``SuiteSparse``, only a subset of solvers is usable, - namely: ``DENSE_QR``, ``DENSE_SCHUR``, ``CGNR``, and ``ITERATIVE_SCHUR``. + #. Without a sparse linear algebra library, only a subset of + solvers is usable, namely: ``DENSE_QR``, ``DENSE_SCHUR``, + ``CGNR``, and ``ITERATIVE_SCHUR``. .. 
_section-android: @@ -527,9 +507,7 @@ The default CMake configuration builds a bare bones version of Ceres Solver that only depends on Eigen (``MINIGLOG`` is compiled into Ceres if it is used), this should be sufficient for solving small to -moderate sized problems (No ``SPARSE_SCHUR``, -``SPARSE_NORMAL_CHOLESKY`` linear solvers and no ``CLUSTER_JACOBI`` -and ``CLUSTER_TRIDIAGONAL`` preconditioners). +moderate sized problems. If you decide to use ``LAPACK`` and ``BLAS``, then you also need to add ``Accelerate.framework`` to your Xcode project's linking @@ -619,22 +597,14 @@ terms. Ceres requires some components that are only licensed under GPL/Commercial terms. -#. ``CXSPARSE [Default: ON]``: By default, Ceres will link to - ``CXSparse`` if all its dependencies are present. Turn this ``OFF`` - to build Ceres without ``CXSparse``. - - .. NOTE:: - - CXSparse is licensed under the LGPL. - #. ``ACCELERATESPARSE [Default: ON]``: By default, Ceres will link to Apple's Accelerate framework directly if a version of it is detected which supports solving sparse linear systems. Note that on Apple OSs Accelerate usually also provides the BLAS/LAPACK implementations and so would be linked against irrespective of the value of ``ACCELERATESPARSE``. -#. ``EIGENSPARSE [Default: ON]``: By default, Ceres will not use - Eigen's sparse Cholesky factorization. +#. ``EIGENSPARSE [Default: ON]``: By default, Ceres will use Eigen's + sparse Cholesky factorization. #. ``GFLAGS [Default: ON]``: Turn this ``OFF`` to build Ceres without ``gflags``. This will also prevent some of the example code from @@ -836,16 +806,14 @@ #. ``SuiteSparse``: Ceres built with SuiteSparse (``SUITESPARSE=ON``). -#. ``CXSparse``: Ceres built with CXSparse (``CXSPARSE=ON``). - #. ``AccelerateSparse``: Ceres built with Apple's Accelerate sparse solvers (``ACCELERATESPARSE=ON``). #. ``EigenSparse``: Ceres built with Eigen's sparse Cholesky factorization (``EIGENSPARSE=ON``). -#. 
``SparseLinearAlgebraLibrary``: Ceres built with *at least one* sparse linear - algebra library. This is equivalent to ``SuiteSparse`` **OR** ``CXSparse`` - **OR** ``AccelerateSparse`` **OR** ``EigenSparse``. +#. ``SparseLinearAlgebraLibrary``: Ceres built with *at least one* + sparse linear algebra library. This is equivalent to + ``SuiteSparse`` **OR** ``AccelerateSparse`` **OR** ``EigenSparse``. #. ``SchurSpecializations``: Ceres built with Schur specializations (``SCHUR_SPECIALIZATIONS=ON``).
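The CMake options the installation changes above refer to (``SUITESPARSE``, ``ACCELERATESPARSE``, ``EIGENSPARSE``) are set at configure time. A minimal sketch following the same build flow installation.rst already describes; the directory names here are illustrative, not prescribed:

```shell
# Configure and build Ceres against an explicit set of sparse backends.
# SUITESPARSE, EIGENSPARSE, and ACCELERATESPARSE are the CMake options
# documented above; each defaults to ON and is used only if the
# corresponding library is actually found.
mkdir ceres-bin && cd ceres-bin
cmake ../ceres-solver -DSUITESPARSE=ON -DEIGENSPARSE=ON
make -j4
make test
```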

diff --git a/docs/source/nnls_solving.rst b/docs/source/nnls_solving.rst
index 516b261..00e1e3e 100644
--- a/docs/source/nnls_solving.rst
+++ b/docs/source/nnls_solving.rst

@@ -489,11 +489,13 @@ ``SPARSE_NORMAL_CHOLESKY``, as the name implies performs a sparse Cholesky factorization of the normal equations. This leads to -substantial savings in time and memory for large sparse -problems. Ceres uses the sparse Cholesky factorization routines in -Professor Tim Davis' ``SuiteSparse`` or ``CXSparse`` packages [Chen]_ -or the sparse Cholesky factorization algorithm in ``Eigen`` (which -incidentally is a port of the algorithm implemented inside ``CXSparse``) +substantial savings in time and memory for large sparse problems. The +use of this linear solver requires that Ceres is compiled with support +for at least one of: + + 1. SuiteSparse + 2. Apple's Accelerate framework. + 3. Eigen's sparse linear solvers. .. _section-cgnr: @@ -822,33 +824,31 @@ have a significant of impact on the efficiency and accuracy of the method. For example when doing sparse Cholesky factorization, there are matrices for which a good ordering will give a Cholesky factor -with :math:`O(n)` storage, where as a bad ordering will result in an +with :math:`O(n)` storage, whereas a bad ordering will result in an completely dense factor. Ceres allows the user to provide varying amounts of hints to the solver about the variable elimination ordering to use. This can range -from no hints, where the solver is free to decide the best ordering -based on the user's choices like the linear solver being used, to an -exact order in which the variables should be eliminated, and a variety -of possibilities in between. +from no hints, where the solver is free to decide the best possible +ordering based on the user's choices like the linear solver being +used, to an exact order in which the variables should be eliminated, +and a variety of possibilities in between. Instances of the :class:`ParameterBlockOrdering` class are used to communicate this information to Ceres. -Formally an ordering is an ordered partitioning of the parameter -blocks. 
Each parameter block belongs to exactly one group, and each -group has a unique integer associated with it, that determines its -order in the set of groups. We call these groups *Elimination Groups* +Formally an ordering is an ordered partitioning of the +parameter blocks, i.e, each parameter block belongs to exactly +one group, and each group has a unique non-negative integer +associated with it, that determines its order in the set of +groups. -Given such an ordering, Ceres ensures that the parameter blocks in the -lowest numbered elimination group are eliminated first, and then the -parameter blocks in the next lowest numbered elimination group and so -on. Within each elimination group, Ceres is free to order the -parameter blocks as it chooses. For example, consider the linear system + +e.g. Consider the linear system .. math:: - x + y &= 3\\ - 2x + 3y &= 7 + x + y &= 3 \\ + 2x + 3y &= 7 There are two ways in which it can be solved. First eliminating :math:`x` from the two equations, solving for :math:`y` and then back @@ -856,33 +856,87 @@ for :math:`x` and back substituting for :math:`y`. The user can construct three orderings here. -1. :math:`\{0: x\}, \{1: y\}` : Eliminate :math:`x` first. -2. :math:`\{0: y\}, \{1: x\}` : Eliminate :math:`y` first. -3. :math:`\{0: x, y\}` : Solver gets to decide the elimination order. +1. :math:`\{0: x\}, \{1: y\}` - eliminate :math:`x` first. +2. :math:`\{0: y\}, \{1: x\}` - eliminate :math:`y` first. +3. :math:`\{0: x, y\}` - Solver gets to decide the elimination order. -Thus, to have Ceres determine the ordering automatically using -heuristics, put all the variables in the same elimination group. The -identity of the group does not matter. This is the same as not -specifying an ordering at all. To control the ordering for every -variable, create an elimination group per variable, ordering them in -the desired order. 
+Thus, to have Ceres determine the ordering automatically, put all the +variables in group 0 and to control the ordering for every variable, +create groups :math:`0 \dots N-1`, one per variable, in the desired +order. -If the user is using one of the Schur solvers (``DENSE_SCHUR``, -``SPARSE_SCHUR``, ``ITERATIVE_SCHUR``) and chooses to specify an -ordering, it must have one important property. The lowest numbered -elimination group must form an independent set in the graph -corresponding to the Hessian, or in other words, no two parameter -blocks in in the first elimination group should co-occur in the same -residual block. For the best performance, this elimination group -should be as large as possible. For standard bundle adjustment -problems, this corresponds to the first elimination group containing -all the 3d points, and the second containing the all the cameras -parameter blocks. +``linear_solver_ordering == nullptr`` and an ordering where all the +parameter blocks are in one elimination group mean the same thing - +the solver is free to choose what it thinks is the best elimination +ordering. Therefore in the following we will only consider the case +where ``linear_solver_ordering != nullptr``. + +The exact interpretation of ``linear_solver_ordering`` depends on +the values of ``Solver::Options::linear_solver_ordering_type``, +``Solver::Options::linear_solver_type``, +``Solver::Options::preconditioner_type`` and +``Solver::Options::sparse_linear_algebra_library_type`` as we will explain +below. + +Bundle Adjustment +----------------- + +If the user is using one of the Schur solvers (DENSE_SCHUR, +SPARSE_SCHUR, ITERATIVE_SCHUR) and chooses to specify an ordering, it +must have one important property. The lowest numbered elimination +group must form an independent set in the graph corresponding to the +Hessian, or in other words, no two parameter blocks in the first +elimination group should co-occur in the same residual block.
For the +best performance, this elimination group should be as large as +possible. For standard bundle adjustment problems, this corresponds to +the first elimination group containing all the 3d points, and the +second containing the parameter blocks for all the cameras. If the user leaves the choice to Ceres, then the solver uses an approximate maximum independent set algorithm to identify the first elimination group [LiSaad]_. +sparse_linear_algebra_library_type = SUITE_SPARSE +------------------------------------------------- + +**linear_solver_ordering_type = AMD** + +A constrained Approximate Minimum Degree (CAMD) ordering is used where +the parameter blocks in the lowest numbered group are eliminated +first, and then the parameter blocks in the next lowest numbered group +and so on. Within each group, CAMD is free to order the parameter blocks +as it chooses. + +**linear_solver_ordering_type = NESDIS** + +a. ``linear_solver_type = SPARSE_NORMAL_CHOLESKY`` or + ``linear_solver_type = CGNR`` and ``preconditioner_type = SUBSET`` + + The value of linear_solver_ordering is ignored and a Nested + Dissection algorithm is used to compute a fill reducing ordering. + +b. ``linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR`` + + ONLY the lowest numbered group is used to compute the Schur complement, and + Nested Dissection is used to compute a fill reducing ordering for + the Schur Complement (or its preconditioner). + +sparse_linear_algebra_library_type = EIGEN_SPARSE or ACCELERATE_SPARSE +---------------------------------------------------------------------- + +a. ``linear_solver_type = SPARSE_NORMAL_CHOLESKY`` or + ``linear_solver_type = CGNR`` and ``preconditioner_type = SUBSET`` + + The value of linear_solver_ordering is ignored and ``AMD`` or + ``NESDIS`` is used to compute a fill reducing ordering as requested + by the user. + +b.
``linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR`` + + ONLY the lowest numbered group is used to compute the Schur complement, and + ``AMD`` or ``NESDIS`` is used to compute a fill reducing ordering + for the Schur Complement (or its preconditioner). + .. _section-solver-options: :class:`Solver::Options` @@ -1261,7 +1315,7 @@ Type of linear solver used to compute the solution to the linear least squares problem in each iteration of the Levenberg-Marquardt algorithm. If Ceres is built with support for ``SuiteSparse`` or - ``CXSparse`` or ``Eigen``'s sparse Cholesky factorization, the + ``Accelerate`` or ``Eigen``'s sparse Cholesky factorization, the default is ``SPARSE_NORMAL_CHOLESKY``, it is ``DENSE_QR`` otherwise. @@ -1340,48 +1394,58 @@ .. member:: SparseLinearAlgebraLibrary Solver::Options::sparse_linear_algebra_library_type Default: The highest available according to: ``SUITE_SPARSE`` > - ``CX_SPARSE`` > ``EIGEN_SPARSE`` > ``NO_SPARSE`` + ``ACCELERATE_SPARSE`` > ``EIGEN_SPARSE`` > ``NO_SPARSE`` Ceres supports the use of three sparse linear algebra libraries, ``SuiteSparse``, which is enabled by setting this parameter to - ``SUITE_SPARSE``, ``CXSparse``, which can be selected by setting - this parameter to ``CX_SPARSE`` and ``Eigen`` which is enabled by - setting this parameter to ``EIGEN_SPARSE``. Lastly, ``NO_SPARSE`` - means that no sparse linear solver should be used; note that this is - irrespective of whether Ceres was compiled with support for one. + ``SUITE_SPARSE``, ``Accelerate``, which can be selected by setting + this parameter to ``ACCELERATE_SPARSE`` and ``Eigen`` which is + enabled by setting this parameter to ``EIGEN_SPARSE``. Lastly, + ``NO_SPARSE`` means that no sparse linear solver should be used; + note that this is irrespective of whether Ceres was compiled with + support for one. - ``SuiteSparse`` is a sophisticated and complex sparse linear - algebra library and should be used in general.
+ ``SuiteSparse`` is a sophisticated sparse linear algebra library + and should be used in general. On MacOS you may want to use the + ``Accelerate`` framework. If your needs/platforms prevent you from using ``SuiteSparse``, - consider using ``CXSparse``, which is a much smaller, easier to - build library. As can be expected, its performance on large - problems is not comparable to that of ``SuiteSparse``. + consider using the sparse linear algebra routines in ``Eigen``. The + sparse Cholesky algorithms currently included with ``Eigen`` are + not as sophisticated as the ones in ``SuiteSparse`` and + ``Accelerate`` and as a result its performance is considerably + worse. - Last but not the least you can use the sparse linear algebra - routines in ``Eigen``. Currently the performance of this library is - the poorest of the three. But this should change in the near - future. +.. member:: LinearSolverOrderingType Solver::Options::linear_solver_ordering_type - Another thing to consider here is that the sparse Cholesky - factorization libraries in Eigen are licensed under ``LGPL`` and - building Ceres with support for ``EIGEN_SPARSE`` will result in an - LGPL licensed library (since the corresponding code from Eigen is - compiled into the library). + Default: ``AMD`` - The upside is that you do not need to build and link to an external - library to use ``EIGEN_SPARSE``. + The order in which variables are eliminated in a linear solver can + have a significant impact on the efficiency and accuracy of the + method. e.g., when doing sparse Cholesky factorization, there are + matrices for which a good ordering will give a Cholesky factor + with :math:`O(n)` storage, whereas a bad ordering will result in + a completely dense factor. + Sparse direct solvers like ``SPARSE_NORMAL_CHOLESKY`` and + ``SPARSE_SCHUR`` use a fill reducing ordering of the columns and + rows of the matrix being factorized before computing the numeric + factorization.
+ + This enum controls the type of algorithm used to compute this fill + reducing ordering. There is no single algorithm that works on all + matrices, so determining which algorithm works better is a matter + of empirical experimentation. .. member:: shared_ptr<ParameterBlockOrdering> Solver::Options::linear_solver_ordering - Default: ``NULL`` + Default: ``nullptr`` An instance of the ordering object informs the solver about the desired order in which parameter blocks should be eliminated by the linear solvers. - If ``NULL``, the solver is free to choose an ordering that it + If ``nullptr``, the solver is free to choose an ordering that it thinks is best. See :ref:`section-ordering` for more details. @@ -1526,14 +1590,14 @@ .. member:: shared_ptr<ParameterBlockOrdering> Solver::Options::inner_iteration_ordering - Default: ``NULL`` + Default: ``nullptr`` If :member:`Solver::Options::use_inner_iterations` true, then the user has two choices. 1. Let the solver heuristically decide which parameter blocks to optimize in each inner iteration. To do this, set - :member:`Solver::Options::inner_iteration_ordering` to ``NULL``. + :member:`Solver::Options::inner_iteration_ordering` to ``nullptr``. 2. Specify a collection of of ordered independent sets. The lower numbered groups are optimized before the higher number groups
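The bundle adjustment ordering described in the hunks above (all 3d points in the lowest numbered elimination group, all cameras after them) can be expressed with the ``ParameterBlockOrdering`` API this patch documents. A non-authoritative sketch, in which ``points`` and ``cameras`` are hypothetical containers of parameter blocks already added to the problem:

```cpp
#include <memory>
#include <vector>

#include "ceres/ordered_groups.h"  // ceres::ParameterBlockOrdering
#include "ceres/solver.h"

// Put all 3d points in elimination group 0 and all cameras in group 1,
// so that the points are eliminated first when a Schur solver is used.
void SetBundleAdjustmentOrdering(const std::vector<double*>& points,
                                 const std::vector<double*>& cameras,
                                 ceres::Solver::Options* options) {
  auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
  for (double* point : points) ordering->AddElementToGroup(point, 0);
  for (double* camera : cameras) ordering->AddElementToGroup(camera, 1);
  options->linear_solver_ordering = ordering;
}
```

Leaving ``Solver::Options::linear_solver_ordering`` as ``nullptr`` instead lets the solver compute the first elimination group itself, as the section notes.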

diff --git a/docs/source/solving_faqs.rst b/docs/source/solving_faqs.rst
index 64604c4..3842e4d 100644
--- a/docs/source/solving_faqs.rst
+++ b/docs/source/solving_faqs.rst

@@ -23,16 +23,13 @@ 2. For general sparse problems (i.e., the Jacobian matrix has a substantial number of zeros) use - ``SPARSE_NORMAL_CHOLESKY``. This requires that you have - ``SuiteSparse`` or ``CXSparse`` installed. + ``SPARSE_NORMAL_CHOLESKY``. 3. For bundle adjustment problems with up to a hundred or so cameras, use ``DENSE_SCHUR``. 4. For larger bundle adjustment problems with sparse Schur - Complement/Reduced camera matrices use ``SPARSE_SCHUR``. This - requires that you build Ceres with support for ``SuiteSparse``, - ``CXSparse`` or Eigen's sparse linear algebra libraries. + Complement/Reduced camera matrices use ``SPARSE_SCHUR``. If you do not have access to these libraries for whatever reason, ``ITERATIVE_SCHUR`` with ``SCHUR_JACOBI`` is an
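The FAQ's decision rule maps directly onto ``Solver::Options::linear_solver_type``. A minimal sketch using the enum values named in the text above; the helper function and its thresholds are illustrative rules of thumb, not part of the Ceres API:

```cpp
#include "ceres/ceres.h"

// Pick a linear solver following the FAQ above. The 100-camera cutoff
// is the FAQ's rule of thumb, not a hard limit.
ceres::LinearSolverType ChooseLinearSolver(bool is_bundle_adjustment,
                                           int num_cameras,
                                           bool jacobian_is_sparse) {
  if (is_bundle_adjustment) {
    return num_cameras <= 100 ? ceres::DENSE_SCHUR : ceres::SPARSE_SCHUR;
  }
  return jacobian_is_sparse ? ceres::SPARSE_NORMAL_CHOLESKY
                            : ceres::DENSE_QR;
}
```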

diff --git a/include/ceres/solver.h b/include/ceres/solver.h
index 6d58183..097276b 100644
--- a/include/ceres/solver.h
+++ b/include/ceres/solver.h

@@ -382,10 +382,10 @@ SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type = #if !defined(CERES_NO_SUITESPARSE) SUITE_SPARSE; -#elif defined(CERES_USE_EIGEN_SPARSE) - EIGEN_SPARSE; #elif !defined(CERES_NO_ACCELERATE_SPARSE) ACCELERATE_SPARSE; +#elif defined(CERES_USE_EIGEN_SPARSE) + EIGEN_SPARSE; #else NO_SPARSE; #endif @@ -407,143 +407,115 @@ // that works on all matrices, so determining which algorithm // works better is a matter of empirical experimentation. // - // Implementation status: - // - // AMD works for SUITE_SPARSE, EIGEN_SPARSE & - // ACCELERATE_SPARSE. - // - // NESDIS currently works for SUITE_SPARSE when using - // SPARSE_NORMAL_CHOLESKY, SPARSE_SCHUR, CGNR + SUBSET, - // ITERATIVE_SCHUR + CLUSTER_JACOBI, ITERATIVE_SCHUR + - // CLUSTER_TRIDIAGONAL linear solvers. + // The exact behaviour of this setting is affected by the value of + // linear_solver_ordering as described below. LinearSolverOrderingType linear_solver_ordering_type = AMD; - // Ceres allows the user to provide varying amounts of hints to - // the solver about the variable elimination ordering to use. This - // can range from no hints, where the solver is free to decide the - // best possible ordering based on the user's choices like the - // linear solver being used, to an exact order in which the - // variables should be eliminated, and a variety of possibilities - // in between. + // Besides specifying the fill reducing ordering via + // linear_solver_ordering_type, Ceres allows the user to provide varying + // amounts of hints to the linear solver about the variable elimination + // ordering to use. This can range from no hints, where the solver is free + // to decide the best possible ordering based on the user's choices like the + // linear solver being used, to an exact order in which the variables should + // be eliminated, and a variety of possibilities in between. 
// - // Instances of the ParameterBlockOrdering class are used to - // communicate this information to Ceres. + // Instances of the ParameterBlockOrdering class are used to communicate + // this information to Ceres. // - // Formally an ordering is an ordered partitioning of the - // parameter blocks, i.e, each parameter block belongs to exactly - // one group, and each group has a unique non-negative integer - // associated with it, that determines its order in the set of - // groups. - // - // The exact interpretation of this information depends on the - // values of linear_solver_ordering_type and - // linear_solver_type/preconditioner_type and - // sparse_linear_algebra_type. - // - // sparse_linear_algebra_library_type != SUITE_SPARSE - // ================================================== - // - // The value of linear_solver_ordering_type does not matter and: - // - // a. linear_solver_type = SPARSE_NORMAL_CHOLESKY or - // linear_solver_type = CGNR and preconditioner_type = SUBSET - // - // then the value of linear_solver_ordering is ignored. - // - // b. linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR - // - // then if linear_solver_ordering is non-null, then the lowest - // number group is used as the first elimination group to compute - // the Schur complement. All other groups are ignored. - // - // sparse_linear_algebra_library_type == SUITE_SPARSE - // ================================================== - // - // linear_solver_ordering_type = AMD - // --------------------------------- - // - // linear_solver_type = SPARSE_NORMAL_CHOLESKY or - // linear_solver_type = CGNR and preconditioner_type = SUBSET - // - // if linear_solver_ordering = nullptr, then Ceres assumes that - // all parameter blocks are in the same elimination group and uses - // the Approximate Minimum Degree algorithm to compute a fill - // reducing ordering. 
- // - // If linear_solver_order is not null, then a Constrained - // Approximate Minimum Degree (CAMD) ordering used where the - // parameter blocks in the lowest numbered group are eliminated - // first, and then the parameter blocks in the next lowest - // numbered group and so on. Within each group, CAMD free to order - // the parameter blocks as it chooses. + // Formally an ordering is an ordered partitioning of the parameter blocks, + // i.e, each parameter block belongs to exactly one group, and each group + // has a unique non-negative integer associated with it, that determines its + // order in the set of groups. // // e.g. Consider the linear system // // x + y = 3 // 2x + 3y = 7 // - // There are two ways in which it can be solved. First eliminating x - // from the two equations, solving for y and then back substituting - // for x, or first eliminating y, solving for x and back substituting - // for y. The user can construct three orderings here. + // There are two ways in which it can be solved. First eliminating x from + // the two equations, solving for y and then back substituting for x, or + // first eliminating y, solving for x and back substituting for y. The user + // can construct three orderings here. // // {0: x}, {1: y} - eliminate x first. // {0: y}, {1: x} - eliminate y first. // {0: x, y} - Solver gets to decide the elimination order. // - // Thus, to have Ceres determine the ordering automatically, put - // all the variables in group 0 and to control the ordering for - // every variable, create groups 0..N-1, one per variable, in the - // desired order. + // Thus, to have Ceres determine the ordering automatically, put all the + // variables in group 0 and to control the ordering for every variable + // create groups 0 ... N-1, one per variable, in the desired + // order. // - // b. 
linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR + // linear_solver_ordering == nullptr and an ordering where all the parameter + // blocks are in one elimination group mean the same thing - the solver is + // free to choose what it thinks is the best elimination ordering. Therefore + // in the following we will only consider the case where + // linear_solver_ordering is not nullptr. // - // If linear_solver_ordering is null then an greedy maximum - // independent set algorithm is used to compute the first - // elimination group, and AMD is applied to the corresponding - // Schur complement matrix or a subset thereof used that is used - // as a preconditioner. + // The exact interpretation of this information depends on the values of + // linear_solver_ordering_type and linear_solver_type/preconditioner_type + // and sparse_linear_algebra_library_type. // - // If linear_solver_ordering is not null, then CAMD is used to - // compute a fill reducing ordering. + // Bundle Adjustment + // ================= // - // linear_solver_ordering = NESDIS - // ------------------------------- + // If the user is using one of the Schur solvers (DENSE_SCHUR, + // SPARSE_SCHUR, ITERATIVE_SCHUR) and chooses to specify an + // ordering, it must have one important property. The lowest + // numbered elimination group must form an independent set in the + // graph corresponding to the Hessian, or in other words, no two + // parameter blocks in the first elimination group should + // co-occur in the same residual block. For the best performance, + // this elimination group should be as large as possible. For + // standard bundle adjustment problems, this corresponds to the + // first elimination group containing all the 3d points, and the + // second containing the parameter blocks for all the cameras. + // + // If the user leaves the choice to Ceres, then the solver uses an + // approximate maximum independent set algorithm to identify the first + // elimination group.
+ // + // sparse_linear_algebra_library_type = SUITE_SPARSE + // ================================================= + // + // linear_solver_ordering_type = AMD + // --------------------------------- + // + // A Constrained Approximate Minimum Degree (CAMD) ordering is used where the + // parameter blocks in the lowest numbered group are eliminated first, and + // then the parameter blocks in the next lowest numbered group and so + // on. Within each group, CAMD is free to order the parameter blocks as it + // chooses. + // + // linear_solver_ordering_type = NESDIS + // ------------------------------------- // // a. linear_solver_type = SPARSE_NORMAL_CHOLESKY or // linear_solver_type = CGNR and preconditioner_type = SUBSET // - // then the value of linear_solver_ordering is ignored and a - // Nested Dissection algorithm is used to compute a fill reducing - // ordering. + // The value of linear_solver_ordering is ignored and a Nested Dissection + // algorithm is used to compute a fill reducing ordering. // // b. linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR // - // If linear_solver_ordering is null then an greedy maximum - // independent set algorithm is used to compute the first - // elimination group, and Nested Dissection is applied to the - // corresponding Schur complement matrix or a subset thereof used - // that is used as a preconditioner. + // ONLY the lowest numbered group is used to compute the Schur complement, and + // Nested Dissection is used to compute a fill reducing ordering for the + // Schur Complement (or its preconditioner). // - // If linear_solver_ordering is not null, the parameter blocks in - // the lowest numbered group are eliminated to compute the Schur - // complement. All other groups are ignored and Nested Dissection - // is applied to the corresponding Schur complement matrix or a - // subset thereof used that is used as a preconditioner.
+ // sparse_linear_algebra_library_type = EIGEN_SPARSE or ACCELERATE_SPARSE + // ====================================================================== // - // Bundle Adjustment - // ----------------- + // a. linear_solver_type = SPARSE_NORMAL_CHOLESKY or + // linear_solver_type = CGNR and preconditioner_type = SUBSET // - // A particular case of interest is bundle adjustment, where the - // user has two options. The default is not specifying ordering at - // all; in this case the solver will see that the user wants to - // use a Schur type solver and figure out an elimination ordering. + // then the value of linear_solver_ordering is ignored and AMD or NESDIS is + // used to compute a fill reducing ordering as requested by the user. // - // But if the user already knows what parameter blocks are points and - // what are cameras, they can save preprocessing time by partitioning - // the parameter blocks into two groups, one for the points and one - // for the cameras, where the group containing the points has an id - // smaller than the group containing cameras. + // b. linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR + // + // ONLY the lowest numbered group is used to compute the Schur complement, and + // AMD or NESDIS is used to compute a fill reducing ordering for the Schur + // Complement (or its preconditioner). std::shared_ptr<ParameterBlockOrdering> linear_solver_ordering; // Use an explicitly computed Schur complement matrix with