Refactor Covariance::Options::algorithm_type.

THIS IS AN API BREAKING CHANGE.

Decouple the algorithm from the sparse linear algebra
library being used to perform the computation.

Before this change, Covariance::Options::algorithm_type could take
the values

DENSE_SVD
EIGEN_SPARSE_QR
SUITE_SPARSE_QR

This single enum has now been replaced by two options:

Covariance::Options::sparse_linear_algebra_library_type, which can
take the values EIGEN_SPARSE, SUITE_SPARSE or CX_SPARSE. CX_SPARSE
is not currently supported.

and Covariance::Options::algorithm_type, which takes the values

DENSE_SVD
SPARSE_QR

This sets the stage for future extensions of the covariance
computation algorithm.
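
For example, a minimal sketch of the new usage (assuming SuiteSparse
is available):

  Covariance::Options options;
  options.algorithm_type = SPARSE_QR;
  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
  Covariance covariance(options);

Code that previously set algorithm_type to SUITE_SPARSE_QR or
EIGEN_SPARSE_QR will need to be updated along these lines.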

Also as part of this change, the covariance estimation documentation
has been made a top-level chapter of its own instead of being buried
deep inside the chapter on solving non-linear least squares problems.

Change-Id: Ibfbf60902d8d17694d9ff585047a5a57d329ab22
diff --git a/docs/source/index.rst b/docs/source/index.rst
index ae3be3d..d72368f 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -27,6 +27,7 @@
    derivatives
    nnls_modeling
    nnls_solving
+   nnls_covariance
    gradient_solver
    faqs
    users
diff --git a/docs/source/nnls_solving.rst b/docs/source/nnls_solving.rst
index 5863ff0..fed0b3e 100644
--- a/docs/source/nnls_solving.rst
+++ b/docs/source/nnls_solving.rst
@@ -2255,365 +2255,3 @@
 
    If the type of the line search direction is `LBFGS`, then this
    indicates the rank of the Hessian approximation.
-
-Covariance Estimation
-=====================
-
-Background
-----------
-
-One way to assess the quality of the solution returned by a
-non-linear least squares solve is to analyze the covariance of the
-solution.
-
-Let us consider the non-linear regression problem
-
-.. math::  y = f(x) + N(0, I)
-
-i.e., the observation :math:`y` is a random non-linear function of the
-independent variable :math:`x` with mean :math:`f(x)` and identity
-covariance. Then the maximum likelihood estimate of :math:`x` given
-observations :math:`y` is the solution to the non-linear least squares
-problem:
-
-.. math:: x^* = \arg \min_x \|f(x)\|^2
-
-And the covariance of :math:`x^*` is given by
-
-.. math:: C(x^*) = \left(J'(x^*)J(x^*)\right)^{-1}
-
-Here :math:`J(x^*)` is the Jacobian of :math:`f` at :math:`x^*`. The
-above formula assumes that :math:`J(x^*)` has full column rank.
-
-If :math:`J(x^*)` is rank deficient, then the covariance matrix :math:`C(x^*)`
-is also rank deficient and is given by the Moore-Penrose pseudo inverse.
-
-.. math:: C(x^*) =  \left(J'(x^*)J(x^*)\right)^{\dagger}
-
-Note that in the above, we assumed that the covariance matrix for
-:math:`y` was identity. This is an important assumption. If this is
-not the case and we have
-
-.. math:: y = f(x) + N(0, S)
-
-where :math:`S` is a positive semi-definite matrix denoting the
-covariance of :math:`y`, then the maximum likelihood problem to be
-solved is
-
-.. math:: x^* = \arg \min_x f'(x) S^{-1} f(x)
-
-and the corresponding covariance estimate of :math:`x^*` is given by
-
-.. math:: C(x^*) = \left(J'(x^*) S^{-1} J(x^*)\right)^{-1}
-
-So, if the observations being fitted to have a covariance matrix not
-equal to identity, then it is the user's responsibility to ensure
-that the corresponding cost functions are correctly scaled, e.g. in
-the above case the cost function for this problem should evaluate
-:math:`S^{-1/2} f(x)` instead of just :math:`f(x)`,
-where :math:`S^{-1/2}` is the inverse square root of the covariance
-matrix :math:`S`.
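-
-For a diagonal covariance :math:`S = \text{diag}(\sigma_i^2)`, this
-amounts to dividing each residual by the corresponding
-:math:`\sigma_i`. A minimal sketch of such a pre-whitened cost
-functor (the observation, model and member names below are purely
-illustrative):
-
-.. code-block:: c++
-
- // Evaluates S^{-1/2} f(x) for a single scalar observation y with
- // known standard deviation sigma, i.e. the residual is divided by
- // sigma so that the scaled residuals have identity covariance.
- struct ScaledResidual {
-   ScaledResidual(double t, double y, double sigma)
-       : t_(t), y_(y), sigma_(sigma) {}
-
-   template <typename T>
-   bool operator()(const T* const x, T* residual) const {
-     residual[0] = (y_ - x[0] * t_) / sigma_;
-     return true;
-   }
-
-   double t_, y_, sigma_;
- };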
-
-Gauge Invariance
-----------------
-
-In structure from motion (3D reconstruction) problems, the
-reconstruction is ambiguous up to a similarity transform. This is
-known as a *Gauge Ambiguity*. Handling Gauges correctly requires the
-use of SVD or custom inversion algorithms. For small problems the
-user can use the dense algorithm. For more details see the work of
-Kanatani & Morris [KanataniMorris]_.
-
-
-:class:`Covariance`
--------------------
-
-:class:`Covariance` allows the user to evaluate the covariance for a
-non-linear least squares problem and provides random access to its
-blocks. The computation assumes that the cost functions compute
-residuals such that their covariance is identity.
-
-Since the computation of the covariance matrix requires computing the
-inverse of a potentially large matrix, this can involve a rather large
-amount of time and memory. However, it is usually the case that the
-user is only interested in a small part of the covariance matrix,
-quite often just the block diagonal. :class:`Covariance`
-allows the user to specify the parts of the covariance matrix that she
-is interested in and then uses this information to only compute and
-store those parts of the covariance matrix.
-
-Rank of the Jacobian
---------------------
-
-As we noted above, if the Jacobian is rank deficient, then the inverse
-of :math:`J'J` is not defined and instead a pseudo inverse needs to be
-computed.
-
-The rank deficiency in :math:`J` can be *structural* -- columns
-which are always known to be zero -- or *numerical* -- depending on
-the exact values in the Jacobian.
-
-Structural rank deficiency occurs when the problem contains parameter
-blocks that are constant. :class:`Covariance` correctly handles
-structural rank deficiency of this kind.
-
-Numerical rank deficiency, where the rank of the matrix cannot be
-predicted by its sparsity structure and requires looking at its
-numerical values, is more complicated. Here again there are two
-cases.
-
-  a. The rank deficiency arises from overparameterization. e.g., a
-     four dimensional quaternion used to parameterize :math:`SO(3)`,
-     which is a three dimensional manifold. In cases like this, the
-     user should use an appropriate
-     :class:`LocalParameterization`. Not only will this lead to better
-     numerical behaviour of the Solver, it will also expose the rank
-     deficiency to the :class:`Covariance` object so that it can
-     handle it correctly (see the sketch after this list).
-
-  b. More general numerical rank deficiency in the Jacobian requires
-     the computation of the so called Singular Value Decomposition
-     (SVD) of :math:`J'J`. We do not know how to do this for large
-     sparse matrices efficiently. For small and moderate sized
-     problems this is done using dense linear algebra.
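-
-For case (a), a minimal sketch of what this looks like for a
-quaternion parameter block, using :class:`QuaternionParameterization`
-(the variable names are illustrative):
-
-.. code-block:: c++
-
- // Attach a local parameterization so that the SO(3)
- // overparameterization is visible to the solver and to Covariance.
- double q[4] = {1.0, 0.0, 0.0, 0.0};
- Problem problem;
- problem.AddParameterBlock(q, 4, new QuaternionParameterization);
- // ... add residual blocks, solve ...
- // GetCovarianceBlockInTangentSpace(q, q, ...) then returns a 3x3
- // tangent space covariance block, while GetCovarianceBlock(q, q, ...)
- // returns the corresponding 4x4 block in the ambient space.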
-
-
-:class:`Covariance::Options`
------------------------------
-
-.. class:: Covariance::Options
-
-.. member:: int Covariance::Options::num_threads
-
-   Default: ``1``
-
-   Number of threads to be used for evaluating the Jacobian and
-   estimation of covariance.
-
-.. member:: CovarianceAlgorithmType Covariance::Options::algorithm_type
-
-   Default: ``SUITE_SPARSE_QR`` if ``SuiteSparseQR`` is installed and
-   ``EIGEN_SPARSE_QR`` otherwise.
-
-   Ceres supports three different algorithms for covariance
-   estimation, which represent different tradeoffs in speed, accuracy
-   and reliability.
-
-   1. ``DENSE_SVD`` uses ``Eigen``'s ``JacobiSVD`` to perform the
-      computations. It computes the singular value decomposition
-
-      .. math::   U S V^\top = J
-
-      and then uses it to compute the pseudo inverse of J'J as
-
-      .. math::   (J'J)^{\dagger} = V \left(S^\top S\right)^{\dagger} V^\top
-
-      It is an accurate but slow method and should only be used for
-      small to moderate sized problems. It can handle full-rank as
-      well as rank deficient Jacobians.
-
-   2. ``EIGEN_SPARSE_QR`` uses the sparse QR factorization algorithm
-      in ``Eigen`` to compute the decomposition
-
-       .. math::
-
-          QR &= J\\
-          \left(J^\top J\right)^{-1} &= \left(R^\top R\right)^{-1}
-
-      It is a moderately fast algorithm for sparse matrices.
-
-   3. ``SUITE_SPARSE_QR`` uses the sparse QR factorization algorithm
-      in ``SuiteSparse``. It uses dense linear algebra and is
-      multi-threaded, so for large sparse matrices it is
-      significantly faster than ``EIGEN_SPARSE_QR``.
-
-   Neither ``EIGEN_SPARSE_QR`` nor ``SUITE_SPARSE_QR`` are capable of
-   computing the covariance if the Jacobian is rank deficient.
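-
-   A short sketch of choosing between these algorithms (using the
-   option names as defined above):
-
-   .. code-block:: c++
-
-    Covariance::Options options;
-    // Small or possibly rank deficient problem: accurate but slow.
-    options.algorithm_type = DENSE_SVD;
-    // Large sparse problem with a full rank Jacobian, if SuiteSparse
-    // is available:
-    // options.algorithm_type = SUITE_SPARSE_QR;
-    Covariance covariance(options);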
-
-.. member:: double Covariance::Options::min_reciprocal_condition_number
-
-   Default: :math:`10^{-14}`
-
-   If the Jacobian matrix is near singular, then inverting :math:`J'J`
-   will produce unreliable results, e.g., if
-
-   .. math::
-
-     J = \begin{bmatrix}
-         1.0& 1.0 \\
-         1.0& 1.0000001
-         \end{bmatrix}
-
-   which is essentially a rank deficient matrix, we have
-
-   .. math::
-
-     (J'J)^{-1} = \begin{bmatrix}
-                  2.0471e+14&  -2.0471e+14 \\
-                  -2.0471e+14&  2.0471e+14
-                  \end{bmatrix}
-
-
-   This is not a useful result. Therefore, by default
-   :func:`Covariance::Compute` will return ``false`` if a rank
-   deficient Jacobian is encountered. How rank deficiency is detected
-   depends on the algorithm being used.
-
-   1. ``DENSE_SVD``
-
-      .. math:: \frac{\sigma_{\text{min}}}{\sigma_{\text{max}}}  < \sqrt{\text{min_reciprocal_condition_number}}
-
-      where :math:`\sigma_{\text{min}}` and
-      :math:`\sigma_{\text{max}}` are the minimum and maximum
-      singular values of :math:`J` respectively.
-
-   2. ``EIGEN_SPARSE_QR`` and ``SUITE_SPARSE_QR``
-
-       .. math:: \operatorname{rank}(J) < \operatorname{num\_col}(J)
-
-       Here :math:`\operatorname{rank}(J)` is the estimate of the
-       rank of :math:`J` returned by the sparse QR factorization
-       algorithm. It is a fairly reliable indication of rank
-       deficiency.
-
-.. member:: int Covariance::Options::null_space_rank
-
-    When using ``DENSE_SVD``, the user has more control in dealing
-    with singular and near singular covariance matrices.
-
-    As mentioned above, when the covariance matrix is near singular,
-    instead of computing the inverse of :math:`J'J`, the Moore-Penrose
-    pseudoinverse of :math:`J'J` should be computed.
-
-    If :math:`J'J` has the eigendecomposition :math:`(\lambda_i,
-    e_i)`, where :math:`\lambda_i` is the :math:`i^\textrm{th}`
-    eigenvalue and :math:`e_i` is the corresponding eigenvector, then
-    the inverse of :math:`J'J` is
-
-    .. math:: (J'J)^{-1} = \sum_i \frac{1}{\lambda_i} e_i e_i'
-
-    and computing the pseudo inverse involves dropping terms from this
-    sum that correspond to small eigenvalues.
-
-    How terms are dropped is controlled by
-    `min_reciprocal_condition_number` and `null_space_rank`.
-
-    If `null_space_rank` is non-negative, then the smallest
-    `null_space_rank` eigenvalue/eigenvector pairs are dropped
-    irrespective of the magnitude of :math:`\lambda_i`. If the ratio
-    of the smallest non-zero eigenvalue to the largest eigenvalue in
-    the truncated matrix is still below
-    `min_reciprocal_condition_number`, then `Covariance::Compute()`
-    will fail and return `false`.
-
-    Setting `null_space_rank = -1` drops all terms for which
-
-    .. math::  \frac{\lambda_i}{\lambda_{\textrm{max}}} < \textrm{min_reciprocal_condition_number}
-
-    This option has no effect on ``EIGEN_SPARSE_QR`` and
-    ``SUITE_SPARSE_QR``.
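-
-    A sketch of using `null_space_rank` together with
-    `min_reciprocal_condition_number` and ``DENSE_SVD`` (the values
-    below are purely illustrative):
-
-    .. code-block:: c++
-
-     Covariance::Options options;
-     options.algorithm_type = DENSE_SVD;
-     options.min_reciprocal_condition_number = 1e-14;
-     // Unconditionally drop the two smallest eigenvalue/eigenvector
-     // pairs when forming the pseudo inverse.
-     options.null_space_rank = 2;
-     // Alternatively, drop every term whose eigenvalue ratio falls
-     // below min_reciprocal_condition_number:
-     // options.null_space_rank = -1;
-     Covariance covariance(options);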
-
-.. member:: bool Covariance::Options::apply_loss_function
-
-   Default: `true`
-
-   Even though the residual blocks in the problem may contain loss
-   functions, setting ``apply_loss_function`` to false will turn off
-   the application of the loss function to the output of the cost
-   function and in turn its effect on the covariance.
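-
-   A sketch (assuming the problem uses loss functions):
-
-   .. code-block:: c++
-
-    Covariance::Options options;
-    // Estimate the covariance of the unweighted (non-robustified)
-    // residuals even though the problem uses loss functions.
-    options.apply_loss_function = false;
-    Covariance covariance(options);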
-
-.. class:: Covariance
-
-   :class:`Covariance::Options`, as the name implies, is used to control
-   the covariance estimation algorithm. Covariance estimation is a
-   complicated and numerically sensitive procedure. Please read the
-   entire documentation for :class:`Covariance::Options` before using
-   :class:`Covariance`.
-
-.. function:: bool Covariance::Compute(const vector<pair<const double*, const double*> >& covariance_blocks, Problem* problem)
-
-   Compute a part of the covariance matrix.
-
-   The vector ``covariance_blocks`` indexes into the covariance
-   matrix block-wise using pairs of parameter blocks. This allows the
-   covariance estimation algorithm to only compute and store these
-   blocks.
-
-   Since the covariance matrix is symmetric, if the user passes
-   ``<block1, block2>``, then ``GetCovarianceBlock`` can be called with
-   ``block1``, ``block2`` as well as ``block2``, ``block1``.
-
-   ``covariance_blocks`` cannot contain duplicates. Bad things will
-   happen if they do.
-
-   Note that the list of ``covariance_blocks`` is only used to
-   determine which parts of the covariance matrix are computed. The
-   full Jacobian is always used to do the computation, i.e. the list
-   does not affect which parts of the Jacobian are used.
-
-   The return value indicates the success or failure of the covariance
-   computation. Please see the documentation for
-   :class:`Covariance::Options` for more on the conditions under which
-   this function returns ``false``.
-
-.. function:: bool GetCovarianceBlock(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const
-
-   Return the block of the cross-covariance matrix corresponding to
-   ``parameter_block1`` and ``parameter_block2``.
-
-   ``Compute`` must be called before the first call to
-   ``GetCovarianceBlock``, and the pair ``<parameter_block1,
-   parameter_block2>`` OR the pair ``<parameter_block2,
-   parameter_block1>`` must have been present in the vector
-   ``covariance_blocks`` when ``Compute`` was called. Otherwise
-   ``GetCovarianceBlock`` will return ``false``.
-
-   ``covariance_block`` must point to a memory location that can store
-   a ``parameter_block1_size x parameter_block2_size`` matrix. The
-   returned covariance will be a row-major matrix.
-
-.. function:: bool GetCovarianceBlockInTangentSpace(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const
-
-   Return the block of the cross-covariance matrix corresponding to
-   ``parameter_block1`` and ``parameter_block2``.
-   Returns cross-covariance in the tangent space if a local
-   parameterization is associated with either parameter block;
-   else returns cross-covariance in the ambient space.
-
-   ``Compute`` must be called before the first call to
-   ``GetCovarianceBlockInTangentSpace``, and the pair
-   ``<parameter_block1, parameter_block2>`` OR the pair
-   ``<parameter_block2, parameter_block1>`` must have been present in
-   the vector ``covariance_blocks`` when ``Compute`` was called.
-   Otherwise ``GetCovarianceBlockInTangentSpace`` will return
-   ``false``.
-
-   ``covariance_block`` must point to a memory location that can store
-   a ``parameter_block1_local_size x parameter_block2_local_size`` matrix. The
-   returned covariance will be a row-major matrix.
-
-Example Usage
--------------
-
-.. code-block:: c++
-
- double x[3];
- double y[2];
-
- Problem problem;
- problem.AddParameterBlock(x, 3);
- problem.AddParameterBlock(y, 2);
- <Build Problem>
- <Solve Problem>
-
- Covariance::Options options;
- Covariance covariance(options);
-
- vector<pair<const double*, const double*> > covariance_blocks;
- covariance_blocks.push_back(make_pair(x, x));
- covariance_blocks.push_back(make_pair(y, y));
- covariance_blocks.push_back(make_pair(x, y));
-
- CHECK(covariance.Compute(covariance_blocks, &problem));
-
- double covariance_xx[3 * 3];
- double covariance_yy[2 * 2];
- double covariance_xy[3 * 2];
- covariance.GetCovarianceBlock(x, x, covariance_xx);
- covariance.GetCovarianceBlock(y, y, covariance_yy);
- covariance.GetCovarianceBlock(x, y, covariance_xy);
diff --git a/include/ceres/covariance.h b/include/ceres/covariance.h
index 930f96c..0538522 100644
--- a/include/ceres/covariance.h
+++ b/include/ceres/covariance.h
@@ -200,19 +200,28 @@
 class CERES_EXPORT Covariance {
  public:
   struct CERES_EXPORT Options {
-    Options()
-#ifndef CERES_NO_SUITESPARSE
-        : algorithm_type(SUITE_SPARSE_QR),
-#else
-        : algorithm_type(EIGEN_SPARSE_QR),
+    Options() {
+      algorithm_type = SPARSE_QR;
+
+      // Eigen's QR factorization is always available.
+      sparse_linear_algebra_library_type = EIGEN_SPARSE;
+#if !defined(CERES_NO_SUITESPARSE)
+      sparse_linear_algebra_library_type = SUITE_SPARSE;
 #endif
-          min_reciprocal_condition_number(1e-14),
-          null_space_rank(0),
-          num_threads(1),
-          apply_loss_function(true) {
+
+      min_reciprocal_condition_number = 1e-14;
+      null_space_rank = 0;
+      num_threads = 1;
+      apply_loss_function = true;
     }
 
-    // Ceres supports three different algorithms for covariance
+    // Sparse linear algebra library to use when a sparse matrix
+    // factorization is being used to compute the covariance matrix.
+    //
+    // Currently this only applies to SPARSE_QR.
+    SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type;
+
+    // Ceres supports two different algorithms for covariance
     // estimation, which represent different tradeoffs in speed,
     // accuracy and reliability.
     //
@@ -229,22 +238,19 @@
     //    for small to moderate sized problems. It can handle
     //    full-rank as well as rank deficient Jacobians.
     //
-    // 2. EIGEN_SPARSE_QR uses the sparse QR factorization algorithm
-    //    in Eigen to compute the decomposition
+    // 2. SPARSE_QR uses the sparse QR factorization algorithm
+    //    to compute the decomposition
     //
     //      Q * R = J
     //
    //    [J'J]^-1 = [R'*R]^-1
     //
-    //    It is a moderately fast algorithm for sparse matrices.
-    //
-    // 3. SUITE_SPARSE_QR uses the SuiteSparseQR sparse QR
-    //    factorization algorithm. It uses dense linear algebra and is
-    //    multi threaded, so for large sparse sparse matrices it is
-    //    significantly faster than EIGEN_SPARSE_QR.
-    //
-    // Neither EIGEN_SPARSE_QR not SUITE_SPARSE_QR are capable of
-    // computing the covariance if the Jacobian is rank deficient.
+    // SPARSE_QR is not capable of computing the covariance if the
+    // Jacobian is rank deficient. Depending on the value of
+    // Covariance::Options::sparse_linear_algebra_library_type, either
+    // Eigen's Sparse QR factorization algorithm will be used or
+    // SuiteSparse's high performance SuiteSparseQR algorithm will be
+    // used.
     CovarianceAlgorithmType algorithm_type;
 
     // If the Jacobian matrix is near singular, then inverting J'J
@@ -270,7 +276,7 @@
    //    where min_sigma and max_sigma are the minimum and maximum
     //    singular values of J respectively.
     //
-    // 2. SUITE_SPARSE_QR and EIGEN_SPARSE_QR
+    // 2. SPARSE_QR
     //
     //      rank(J) < num_col(J)
     //
diff --git a/include/ceres/types.h b/include/ceres/types.h
index 6b9da84..643eb23 100644
--- a/include/ceres/types.h
+++ b/include/ceres/types.h
@@ -420,8 +420,18 @@
 
 enum CovarianceAlgorithmType {
   DENSE_SVD,
-  SUITE_SPARSE_QR,
-  EIGEN_SPARSE_QR
+  SPARSE_QR,
+
+  // TODO(sameeragarwal): Expand this to include
+  // DENSE_QR (Eigen)
+  // DENSE_SCHUR_SVD (Eigen)
+  // SPARSE_SCHUR_LU (Eigen, SuiteSparse)
+  //
+  // Are the following two a good idea, given that they are not rank
+  // revealing?
+  //
+  // DENSE_SCHUR_CHOLESKY (Eigen)
+  // SPARSE_SCHUR_CHOLESKY (Eigen, SuiteSparse)
 };
 
 // It is a near impossibility that user code generates this exact
diff --git a/include/ceres/version.h b/include/ceres/version.h
index 2f1cc29..beab7d9 100644
--- a/include/ceres/version.h
+++ b/include/ceres/version.h
@@ -32,7 +32,7 @@
 #define CERES_PUBLIC_VERSION_H_
 
 #define CERES_VERSION_MAJOR 1
-#define CERES_VERSION_MINOR 12
+#define CERES_VERSION_MINOR 13
 #define CERES_VERSION_REVISION 0
 
 // Classic CPP stringifcation; the extra level of indirection allows the
diff --git a/internal/ceres/covariance_impl.cc b/internal/ceres/covariance_impl.cc
index d698f88..1f594c1 100644
--- a/internal/ceres/covariance_impl.cc
+++ b/internal/ceres/covariance_impl.cc
@@ -538,24 +538,37 @@
 }
 
 bool CovarianceImpl::ComputeCovarianceValues() {
-  switch (options_.algorithm_type) {
-    case DENSE_SVD:
-      return ComputeCovarianceValuesUsingDenseSVD();
-    case SUITE_SPARSE_QR:
-#ifndef CERES_NO_SUITESPARSE
+  if (options_.algorithm_type == DENSE_SVD) {
+    return ComputeCovarianceValuesUsingDenseSVD();
+  }
+
+  if (options_.algorithm_type == SPARSE_QR) {
+    if (options_.sparse_linear_algebra_library_type == EIGEN_SPARSE) {
+      return ComputeCovarianceValuesUsingEigenSparseQR();
+    }
+
+    if (options_.sparse_linear_algebra_library_type == SUITE_SPARSE) {
+#if !defined(CERES_NO_SUITESPARSE)
       return ComputeCovarianceValuesUsingSuiteSparseQR();
 #else
-      LOG(ERROR) << "SuiteSparse is required to use the "
-                 << "SUITE_SPARSE_QR algorithm.";
+      LOG(ERROR) << "SuiteSparse is required to use the SPARSE_QR algorithm "
+                 << "with "
+                 << "Covariance::Options::sparse_linear_algebra_library_type "
+                 << "= SUITE_SPARSE.";
       return false;
 #endif
-    case EIGEN_SPARSE_QR:
-      return ComputeCovarianceValuesUsingEigenSparseQR();
-    default:
-      LOG(ERROR) << "Unsupported covariance estimation algorithm type: "
-                 << CovarianceAlgorithmTypeToString(options_.algorithm_type);
-      return false;
+    }
+
+    LOG(ERROR) << "Unsupported "
+               << "Covariance::Options::sparse_linear_algebra_library_type "
+               << "= "
+               << SparseLinearAlgebraLibraryTypeToString(
+                      options_.sparse_linear_algebra_library_type);
+    return false;
   }
+
+  LOG(ERROR) << "Unsupported Covariance::Options::algorithm_type = "
+             << CovarianceAlgorithmTypeToString(options_.algorithm_type);
   return false;
 }
 
diff --git a/internal/ceres/covariance_test.cc b/internal/ceres/covariance_test.cc
index 63f85c4..92a7626 100644
--- a/internal/ceres/covariance_test.cc
+++ b/internal/ceres/covariance_test.cc
@@ -629,14 +629,16 @@
   Covariance::Options options;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -677,14 +679,16 @@
   options.num_threads = 4;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -726,14 +730,16 @@
   Covariance::Options options;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -781,14 +787,16 @@
   Covariance::Options options;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -839,14 +847,17 @@
   Covariance::Options options;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 }
 
@@ -899,14 +910,17 @@
   Covariance::Options options;
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 }
 
@@ -978,14 +992,16 @@
   covariance.GetCovarianceMatrix(parameter_blocks, expected_covariance);
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -1005,14 +1021,16 @@
   covariance.GetCovarianceMatrix(parameter_blocks, expected_covariance);
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
@@ -1043,14 +1061,17 @@
                                                expected_covariance);
 
 #ifndef CERES_NO_SUITESPARSE
-  options.algorithm_type = SUITE_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 #endif
 
   options.algorithm_type = DENSE_SVD;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 
-  options.algorithm_type = EIGEN_SPARSE_QR;
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 }
 
@@ -1197,10 +1218,14 @@
     }
   }
 
-  void ComputeAndCompare(CovarianceAlgorithmType algorithm_type,
-                         int num_threads) {
+  void ComputeAndCompare(
+      CovarianceAlgorithmType algorithm_type,
+      SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+      int num_threads) {
     Covariance::Options options;
     options.algorithm_type = algorithm_type;
+    options.sparse_linear_algebra_library_type =
+        sparse_linear_algebra_library_type;
     options.num_threads = num_threads;
     Covariance covariance(options);
     EXPECT_TRUE(covariance.Compute(all_covariance_blocks_, &problem_));
@@ -1243,7 +1268,7 @@
 #if !defined(CERES_NO_SUITESPARSE) && defined(CERES_USE_OPENMP)
 
 TEST_F(LargeScaleCovarianceTest, Parallel) {
-  ComputeAndCompare(SUITE_SPARSE_QR, 4);
+  ComputeAndCompare(SPARSE_QR, SUITE_SPARSE, 4);
 }
 
 #endif  // !defined(CERES_NO_SUITESPARSE) && defined(CERES_USE_OPENMP)
diff --git a/internal/ceres/types.cc b/internal/ceres/types.cc
index f86fb78..b833928 100644
--- a/internal/ceres/types.cc
+++ b/internal/ceres/types.cc
@@ -267,8 +267,7 @@
     CovarianceAlgorithmType type) {
   switch (type) {
     CASESTR(DENSE_SVD);
-    CASESTR(EIGEN_SPARSE_QR);
-    CASESTR(SUITE_SPARSE_QR);
+    CASESTR(SPARSE_QR);
     default:
       return "UNKNOWN";
   }
@@ -279,8 +278,7 @@
     CovarianceAlgorithmType* type) {
   UpperCase(&value);
   STRENUM(DENSE_SVD);
-  STRENUM(EIGEN_SPARSE_QR);
-  STRENUM(SUITE_SPARSE_QR);
+  STRENUM(SPARSE_QR);
   return false;
 }