commit | 08e60379bac997b50aa59a1b47292aaf47f2c303 | [log] [tgz] |
---|---|---|
author | Sameer Agarwal <sameeragarwal@google.com> | Tue Jun 13 00:12:00 2017 -0700 |
committer | Sameer Agarwal <sameeragarwal@google.com> | Wed Jun 21 23:41:36 2017 -0700 |
tree | ec458fd9a3dc220b4d706749774831fd98c191d5 | |
parent | b5b394c7388c80b7aebe91a255111c15b5013ce8 [diff] |
Integrate InnerProductComputer

Despite its relative size, this is a very significant change to Ceres.

Why
===

Until now, when the user chose SPARSE_NORMAL_CHOLESKY, the Jacobian was evaluated into a CompressedRowSparseMatrix, which was then used to compute the normal equations, which in turn were passed to a sparse linear algebra library for factorization. The reason for this was that in the case of SuiteSparse, we were able to pass the Jacobian matrix directly without computing the normal equations, and SuiteSparse/CHOLMOD did the normal equation computation. This turned out to be slow, so Cheng Wang implemented a high performance version of the matrix-matrix multiply to compute the normal equations, and all the sparse linear algebra libraries are now passed the normal equations.

This raises the question of which representation of the Jacobian is best suited for the normal equation computation. It turns out BlockSparseMatrix is ideal. It brings two advantages:

1. Jacobian evaluation into a BlockSparseMatrix is considerably faster than into a CompressedRowSparseMatrix, because we save a bunch of memory copies.

2. To make the matrix multiplication fast and exploit the block structure, Cheng Wang had to make CompressedRowSparseMatrix carry a bunch of sidecar information about the block sparsity, essentially making it behave like a BlockSparseMatrix. The resulting code had fairly complicated indexing and complicated the semantics of CompressedRowSparseMatrix. The new InnerProductComputer class does away with all that, and once this CL goes in, I will be able to remove that code and simplify the semantics of CompressedRowSparseMatrix.

Changes
=======

1. Use InnerProductComputer in SparseNormalCholeskySolver.

2. Change the evaluator instantiated for SPARSE_NORMAL_CHOLESKY with static sparsity inside evaluator.cc.

3. The former change necessitates that ProblemImpl::Evaluate create the evaluator it needs on its own, because it was depending on passing SPARSE_NORMAL_CHOLESKY as the linear solver type to the evaluator factory to get an Evaluator that can use CompressedRowSparseMatrix objects for storing the Jacobian.

4. Update the tests for SparseNormalCholeskySolver.

5. Separate out the tests for DynamicSparseNormalCholeskySolver into their own file.

Change-Id: I2ef7ef8fbfbb4967d0c1ec2068c1c778248fdf5b
Ceres Solver is an open source C++ library for modeling and solving large, complicated optimization problems. It is a feature-rich, mature, and performant library that has been used in production at Google since 2010. Ceres Solver can solve two kinds of problems:

1. Non-linear Least Squares problems with bounds constraints.
2. General unconstrained optimization problems.
Please see ceres-solver.org for more information.
Ceres development happens on Gerrit, which hosts both the repository and code reviews. The GitHub repository is a continuously updated mirror that is primarily meant for issue tracking. Please see our Contributing to Ceres guide for more details.
The upstream Gerrit repository is:
https://ceres-solver.googlesource.com/ceres-solver