Add dynamic_sparsity option.

The standard sparse normal Cholesky solver assumes a fixed
sparsity pattern, which is useful for a large number of problems
presented to Ceres. However, some problems are symbolically dense
but numerically sparse, i.e. each residual is a function of a
large number of parameters, but at any given state the residual
only depends on a sparse subset of them. For this class of
problems it is faster to re-analyse the sparsity pattern of the
jacobian at each iteration of the non-linear optimisation instead
of including all of the zero entries in the step computation.
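
For illustration only (this functor is not part of the change and
its names are made up), a residual with this property might look
as follows: every control point is a parameter of the residual,
but at any given state only the nearest control point has non-zero
derivatives, so the corresponding jacobian row is numerically
sparse even though it is symbolically dense.

  #include <limits>

  #include "Eigen/Core"

  struct NearestControlPointError {
    explicit NearestControlPointError(const Eigen::Vector2d& point)
        : point(point) {}

    template <typename T>
    bool operator()(const T* const control_points,  // 2 * kNumControlPoints values.
                    T* residual) const {
      T min_squared_distance = T(std::numeric_limits<double>::max());
      for (int i = 0; i < kNumControlPoints; ++i) {
        const T dx = control_points[2 * i + 0] - T(point.x());
        const T dy = control_points[2 * i + 1] - T(point.y());
        const T squared_distance = dx * dx + dy * dy;
        if (squared_distance < min_squared_distance) {
          // Only the nearest control point contributes derivatives.
          min_squared_distance = squared_distance;
        }
      }
      residual[0] = min_squared_distance;
      return true;
    }

    static const int kNumControlPoints = 100;
    Eigen::Vector2d point;
  };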

The proposed solution adds the dynamic_sparsity option, which can
be used with SPARSE_NORMAL_CHOLESKY. A
DynamicCompressedRowSparseMatrix type (which extends
CompressedRowSparseMatrix) has been introduced that allows dynamic
addition and removal of elements. A Finalize method is provided
which consolidates the matrix so that it can be used in place of a
regular CompressedRowSparseMatrix. An associated jacobian writer
has also been provided.
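
A rough sketch of the intended interface follows; only the class
name, its base class and the Finalize method are taken from the
description above, the remaining method names and signatures are
assumptions made for illustration.

  class DynamicCompressedRowSparseMatrix : public CompressedRowSparseMatrix {
   public:
    DynamicCompressedRowSparseMatrix(int num_rows,
                                     int num_cols,
                                     int initial_max_num_nonzeros);

    // Dynamically add or remove elements while the jacobian for the
    // current state is being written.
    void InsertEntry(int row, int col, const double& value);
    void ClearRows(int row_start, int num_rows);

    // Consolidate the dynamically stored elements into the compressed
    // row storage of the base class, after which the matrix can be
    // used in place of a regular CompressedRowSparseMatrix.
    void Finalize(int num_additional_elements);
  };

From the user's side, enabling the option together with the sparse
normal Cholesky solver is expected to look like this:

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  options.dynamic_sparsity = true;

  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);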

The changes required to support this extension were adding a
SetMaxNumNonZeros method to CompressedRowSparseMatrix and a
JacobianFinalizer template parameter to the ProgramEvaluator.
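
For context, SetMaxNumNonZeros is presumably little more than
resizing the storage backing the matrix so that the number of
non-zeros can grow between iterations. A minimal sketch, assuming
the column index and value arrays are named cols_ and values_:

  void CompressedRowSparseMatrix::SetMaxNumNonZeros(int num_nonzeros) {
    CHECK_GE(num_nonzeros, 0);
    // Member names are assumed for the purpose of this sketch.
    cols_.resize(num_nonzeros);
    values_.resize(num_nonzeros);
  }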

Change-Id: Ia5a8a9523fdae8d5b027bc35e70b4611ec2a8d01
diff --git a/include/ceres/solver.h b/include/ceres/solver.h
index c5bfadc..3d63951 100644
--- a/include/ceres/solver.h
+++ b/include/ceres/solver.h
@@ -108,6 +108,7 @@
 
       num_linear_solver_threads = 1;
       use_postordering = false;
+      dynamic_sparsity = false;
       min_linear_solver_iterations = 1;
       max_linear_solver_iterations = 500;
       eta = 1e-1;
@@ -500,6 +501,21 @@
     // matrix. Setting use_postordering to true enables this tradeoff.
     bool use_postordering;
 
+    // Some non-linear least squares problems are symbolically dense but
+    // numerically sparse, i.e. at any given state only a small number
+    // of jacobian entries are non-zero, but the position and number of
+    // non-zeros are different depending on the state. For these problems
+    // it can be useful to factorize the sparse jacobian at each solver
+    // iteration instead of including all of the zero entries in a single
+    // general factorization.
+    //
+    // If your problem does not have this property (or you do not know),
+    // then it is probably best to keep this false, otherwise it will
+    // likely lead to worse performance.
+    //
+    // This setting affects the SPARSE_NORMAL_CHOLESKY solver.
+    bool dynamic_sparsity;
+
     // Some non-linear least squares problems have additional
     // structure in the way the parameter blocks interact that it is
     // beneficial to modify the way the trust region step is computed.