Use more performant, less conservative Eigen solvers.
colPivHouseholderQR -> householderQR
ldlt -> llt
The resulting performance differences are significant enough
to justify switching.
LAPACK's dgels routine, used for solving linear least squares
problems, does not use pivoting either.
Similarly, we are not actually exploiting LDLT's ability to handle
indefinite matrices, so it is not clear that its performance hit is
worth it.
These two changes allow Eigen to use blocked algorithms, which for
Cholesky factorization brings the performance closer to hardware
optimized LAPACK. Similarly, dense QR factorization sees a 2x
speedup on Intel.
Change-Id: I4459ee0fc8eb87d58e2b299dfaa9e656d539dc5e
diff --git a/internal/ceres/implicit_schur_complement_test.cc b/internal/ceres/implicit_schur_complement_test.cc
index bd36672..1694273 100644
--- a/internal/ceres/implicit_schur_complement_test.cc
+++ b/internal/ceres/implicit_schur_complement_test.cc
@@ -109,7 +109,7 @@
solution->setZero();
VectorRef schur_solution(solution->data() + num_cols_ - num_schur_rows,
num_schur_rows);
- schur_solution = lhs->selfadjointView<Eigen::Upper>().ldlt().solve(*rhs);
+ schur_solution = lhs->selfadjointView<Eigen::Upper>().llt().solve(*rhs);
eliminator->BackSubstitute(A_.get(), b_.get(), D,
schur_solution.data(), solution->data());
}
@@ -156,7 +156,7 @@
// Reference solution to the f_block.
const Vector reference_f_sol =
- lhs.selfadjointView<Eigen::Upper>().ldlt().solve(rhs);
+ lhs.selfadjointView<Eigen::Upper>().llt().solve(rhs);
// Backsubstituted solution from the implicit schur solver using the
// reference solution to the f_block.