LocalParameterization -> Manifold #1

Manifolds are now part of the public API and co-exist
with LocalParameterizations.

1. Add Manifolds to the Problem API.
   a. AddParameterBlock(double*, int, Manifold*)
   b. SetManifold(double*, Manifold*)
   c. GetManifold(const double*)
   d. HasManifold(const double*)
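
   For illustration, here is a minimal sketch of the new calls (hypothetical
   user code; MyManifold stands in for any ceres::Manifold subclass and is
   not part of this change):

     double x[4];
     ceres::Problem problem;
     // Associate MyManifold with x; SetManifold(x, ...) does the same for a
     // block that has already been added.
     problem.AddParameterBlock(x, 4, new MyManifold);
     CHECK(problem.HasManifold(x));
     const ceres::Manifold* manifold = problem.GetManifold(x);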

2. Internally Ceres now only uses Manifolds. When the user uses
   a LocalParameterization, it is wrapped in a ManifoldAdapter.
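
   For example (a sketch continuing the one above, using the existing
   QuaternionParameterization, whose LocalSize is 3):

     double q[4] = {1.0, 0.0, 0.0, 0.0};
     problem.AddParameterBlock(q, 4, new ceres::QuaternionParameterization);
     // The parameterization is wrapped internally, so the Manifold side of
     // the API sees it:
     //   problem.HasManifold(q)               -> true
     //   problem.GetManifold(q)               -> the ManifoldAdapter wrapper
     //   problem.ParameterBlockTangentSize(q) -> 3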

3. To preserve the API semantics while keeping the internals clean
   we need a new map in ProblemImpl which stores the association
   between parameter blocks and local parameterizations. This
   is temporary; it will go away once this transition is complete.

4. There are NO algorithmic changes, i.e., we are not yet using
   any of the expanded interface of the Manifold objects.
   That will come later.

5. All tests that use LocalParameterization have been duplicated
   to use Manifolds, and when this transition is complete the
   LocalParameterization based tests will be deleted.

6. Public documentation for the API has been updated. Deprecation
   notices in the documentation, as well as C++ deprecation
   annotations, will come later.

7. Similar changes have been made to GradientProblem.
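
   A corresponding sketch for GradientProblem (function and manifold are
   hypothetical stand-ins for a user-supplied FirstOrderFunction* and
   Manifold*):

     ceres::GradientProblem problem(function, manifold);
     // Tangent-space dimension; the deprecated NumLocalParameters() now
     // forwards to this.
     const int tangent_size = problem.NumTangentParameters();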

Change-Id: I8e03c8ced6e141876ef3eca5740c113afa788f0c
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 4851be5..68d99c9 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -37,7 +37,7 @@
   cmake_policy(SET CMP0074 NEW)
 endif()
 
-# Set the C++ version (must be >= C++14) when compiling Ceres.
+# Set the C++ version when compiling Ceres.
 #
 # Reflect a user-specified (via -D) CMAKE_CXX_STANDARD if present, otherwise
 # default to C++14.
diff --git a/include/ceres/gradient_checker.h b/include/ceres/gradient_checker.h
index b79cf86..f2ea0fc 100644
--- a/include/ceres/gradient_checker.h
+++ b/include/ceres/gradient_checker.h
@@ -43,6 +43,7 @@
 #include "ceres/internal/eigen.h"
 #include "ceres/internal/fixed_array.h"
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "glog/logging.h"
 
 namespace ceres {
@@ -65,19 +66,39 @@
 // CostFunction, and then call Probe(). Check that the return value is 'true'.
 class CERES_EXPORT GradientChecker {
  public:
-  // This will not take ownership of the cost function or local
+  // This constructor will not take ownership of the cost function or local
   // parameterizations.
   //
   // function: The cost function to probe.
-  // local_parameterizations: A vector of local parameterizations for each
-  // parameter. May be NULL or contain NULL pointers to indicate that the
+  //
+  // local_parameterizations: A vector of local parameterizations, one for each
+  // parameter block. May be nullptr or contain nullptrs to indicate that the
   // respective parameter does not have a local parameterization.
+  //
   // options: Options to use for numerical differentiation.
+  //
+  // NOTE: This constructor is deprecated and will be removed in the next public
+  // release of Ceres Solver. Please transition to using the Manifold based
+  // version.
   GradientChecker(
       const CostFunction* function,
       const std::vector<const LocalParameterization*>* local_parameterizations,
       const NumericDiffOptions& options);
 
+  // This will not take ownership of the cost function or manifolds.
+  //
+  // function: The cost function to probe.
+  //
+  // manifolds: A vector of manifolds, one for each parameter block. May be
+  // nullptr or contain nullptrs to indicate that the respective parameter
+  // blocks are Euclidean.
+  //
+  // options: Options to use for numerical differentiation.
+  GradientChecker(const CostFunction* function,
+                  const std::vector<const Manifold*>* manifolds,
+                  const NumericDiffOptions& options);
+  ~GradientChecker();
+
   // Contains results from a call to Probe for later inspection.
   struct CERES_EXPORT ProbeResults {
     // The return value of the cost function.
@@ -87,11 +108,11 @@
     Vector residuals;
 
     // The sizes of the Jacobians below are dictated by the cost function's
-    // parameter block size and residual block sizes. If a parameter block
-    // has a local parameterization associated with it, the size of the "local"
-    // Jacobian will be determined by the local parameterization dimension and
-    // residual block size, otherwise it will be identical to the regular
-    // Jacobian.
+    // parameter block size and residual block sizes. If a parameter block has a
+    // manifold associated with it, the size of the "local" Jacobian will be
+    // determined by the dimension of the manifold (which is the same as the
+    // dimension of the tangent space) and residual block size, otherwise it
+    // will be identical to the regular Jacobian.
 
     // Derivatives as computed by the cost function.
     std::vector<Matrix> jacobians;
@@ -114,13 +135,13 @@
   };
 
   // Call the cost function, compute alternative Jacobians using finite
-  // differencing and compare results. If local parameterizations are given,
-  // the Jacobians will be multiplied by the local parameterization Jacobians
-  // before performing the check, which effectively means that all errors along
-  // the null space of the local parameterization will be ignored.
-  // Returns false if the Jacobians don't match, the cost function return false,
-  // or if the cost function returns different residual when called with a
-  // Jacobian output argument vs. calling it without. Otherwise returns true.
+  // differencing and compare results. If manifolds are given, the Jacobians
+  // will be multiplied by the manifold Jacobians before performing the check,
+  // which effectively means that all errors along the null space of the
+  // manifold will be ignored. Returns false if the Jacobians don't match, the
+  // cost function returns false, or if the cost function returns a different
+  // residual when called with a Jacobian output argument vs. calling it
+  // without. Otherwise returns true.
   //
   // parameters: The parameter values at which to probe.
   // relative_precision: A threshold for the relative difference between the
@@ -140,7 +161,18 @@
   GradientChecker(const GradientChecker&) = delete;
   void operator=(const GradientChecker&) = delete;
 
-  std::vector<const LocalParameterization*> local_parameterizations_;
+  // This bool records whether the constructor taking LocalParameterizations
+  // was called or the one taking Manifolds. If the former, then the vector of
+  // manifolds contains ManifoldAdapter objects which we own and should delete.
+  // If the latter, then they are real Manifold objects owned by the caller and
+  // will not be deleted.
+  //
+  // This bool is only needed during the LocalParameterization to Manifold
+  // transition; once the transition is complete, the LocalParameterization
+  // based constructor and this bool will be removed.
+  const bool delete_manifolds_ = false;
+
+  std::vector<const Manifold*> manifolds_;
   const CostFunction* function_;
   std::unique_ptr<CostFunction> finite_diff_cost_function_;
 };
diff --git a/include/ceres/gradient_problem.h b/include/ceres/gradient_problem.h
index 7caeef1..d9d40e2 100644
--- a/include/ceres/gradient_problem.h
+++ b/include/ceres/gradient_problem.h
@@ -36,6 +36,7 @@
 #include "ceres/first_order_function.h"
 #include "ceres/internal/port.h"
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 
 namespace ceres {
 
@@ -43,23 +44,22 @@
 
 // Instances of GradientProblem represent general non-linear
 // optimization problems that must be solved using just the value of
-// the objective function and its gradient. Unlike the Problem class,
-// which can only be used to model non-linear least squares problems,
-// instances of GradientProblem not restricted in the form of the
-// objective function.
+// the objective function and its gradient.
+//
+// Unlike the Problem class, which can only be used to model non-linear least
+// squares problems, instances of GradientProblem are not restricted in the form
+// of the objective function.
 //
-// Structurally GradientProblem is a composition of a
-// FirstOrderFunction and optionally a LocalParameterization.
+// Structurally GradientProblem is a composition of a FirstOrderFunction and
+// optionally a Manifold.
 //
-// The FirstOrderFunction is responsible for evaluating the cost and
-// gradient of the objective function.
+// The FirstOrderFunction is responsible for evaluating the cost and gradient of
+// the objective function.
 //
-// The LocalParameterization is responsible for going back and forth
-// between the ambient space and the local tangent space. (See
-// local_parameterization.h for more details). When a
-// LocalParameterization is not provided, then the tangent space is
-// assumed to coincide with the ambient Euclidean space that the
-// gradient vector lives in.
+// The Manifold is responsible for going back and forth between the ambient
+// space and the local tangent space. (See manifold.h for more details). When a
+// Manifold is not provided, then the tangent space is assumed to coincide with
+// the ambient Euclidean space that the gradient vector lives in.
 //
 // Example usage:
 //
@@ -89,17 +89,43 @@
 // };
 //
 // ceres::GradientProblem problem(new Rosenbrock());
+//
+// NOTE: We are currently in the process of transitioning from
+// LocalParameterization to Manifolds in the Ceres API. During this period,
+// GradientProblem will support using both Manifold and LocalParameterization
+// objects interchangeably. For methods in the API affected by this change, see
+// their documentation below.
 class CERES_EXPORT GradientProblem {
  public:
   // Takes ownership of the function.
   explicit GradientProblem(FirstOrderFunction* function);
 
   // Takes ownership of the function and the parameterization.
+  //
+  // NOTE: This constructor is deprecated and will be removed in the next public
+  // release of Ceres Solver. Please move to using the Manifold based
+  // constructor.
   GradientProblem(FirstOrderFunction* function,
                   LocalParameterization* parameterization);
 
+  // Takes ownership of the function and the manifold.
+  GradientProblem(FirstOrderFunction* function, Manifold* manifold);
+
   int NumParameters() const;
-  int NumLocalParameters() const;
+
+  // Dimension of the manifold (and its tangent space).
+  //
+  // During the transition from LocalParameterization to Manifold, this method
+  // reports the LocalSize of the LocalParameterization or the TangentSize of
+  // the Manifold object associated with this problem.
+  int NumTangentParameters() const;
+
+  // Dimension of the manifold (and its tangent space).
+  //
+  // NOTE: This method is deprecated and will be removed in the next public
+  // release of Ceres Solver. Please move to using NumTangentParameters()
+  // instead.
+  int NumLocalParameters() const { return NumTangentParameters(); }
 
   // This call is not thread safe.
   bool Evaluate(const double* parameters, double* cost, double* gradient) const;
@@ -107,9 +133,33 @@
 
   const FirstOrderFunction* function() const { return function_.get(); }
   FirstOrderFunction* mutable_function() { return function_.get(); }
+
+  // NOTE: During the transition from LocalParameterization to Manifold we need
+  // to support both the LocalParameterization and Manifold based constructors.
+  //
+  // When the user uses the LocalParameterization, internally the solver will
+  // wrap it in a ManifoldAdapter object and return it when manifold or
+  // mutable_manifold are called.
+  //
+  // As a result, these methods return a non-nullptr value if a Manifold or a
+  // LocalParameterization was used when constructing the GradientProblem.
+  const Manifold* manifold() const { return manifold_.get(); }
+  Manifold* mutable_manifold() { return manifold_.get(); }
+
+  // If the problem is constructed without a LocalParameterization or with a
+  // Manifold this method will return a nullptr.
+  //
+  // NOTE: This method is deprecated and will be removed in the next public
+  // release of Ceres Solver.
   const LocalParameterization* parameterization() const {
     return parameterization_.get();
   }
+
+  // If the problem is constructed without a LocalParameterization or with a
+  // Manifold this method will return a nullptr.
+  //
+  // NOTE: This method is deprecated and will be removed in the next public
+  // release of Ceres Solver.
   LocalParameterization* mutable_parameterization() {
     return parameterization_.get();
   }
@@ -117,6 +167,7 @@
  private:
   std::unique_ptr<FirstOrderFunction> function_;
   std::unique_ptr<LocalParameterization> parameterization_;
+  std::unique_ptr<Manifold> manifold_;
   std::unique_ptr<double[]> scratch_;
 };
 
diff --git a/include/ceres/problem.h b/include/ceres/problem.h
index 734a968..4436e51 100644
--- a/include/ceres/problem.h
+++ b/include/ceres/problem.h
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -53,6 +53,7 @@
 class EvaluationCallback;
 class LossFunction;
 class LocalParameterization;
+class Manifold;
 class Solver;
 struct CRSMatrix;
 
@@ -78,31 +79,28 @@
 //
 // where
 //
-//   r_ij     is residual number i, component j; the residual is a
-//            function of some subset of the parameters x1...xk. For
-//            example, in a structure from motion problem a residual
-//            might be the difference between a measured point in an
-//            image and the reprojected position for the matching
-//            camera, point pair. The residual would have two
-//            components, error in x and error in y.
+//   r_ij is residual number i, component j; the residual is a function of some
+//        subset of the parameters x1...xk. For example, in a structure from
+//        motion problem a residual might be the difference between a measured
+//        point in an image and the reprojected position for the matching
+//        camera, point pair. The residual would have two components, error in x
+//        and error in y.
 //
-//   loss(y)  is the loss function; for example, squared error or
-//            Huber L1 loss. If loss(y) = y, then the cost function is
-//            non-robustified least squares.
+//   loss(y) is the loss function; for example, squared error or Huber L1
+//           loss. If loss(y) = y, then the cost function is non-robustified
+//           least squares.
 //
-// This class is specifically designed to address the important subset
-// of "sparse" least squares problems, where each component of the
-// residual depends only on a small number number of parameters, even
-// though the total number of residuals and parameters may be very
-// large. This property affords tremendous gains in scale, allowing
-// efficient solving of large problems that are otherwise
-// inaccessible.
+// This class is specifically designed to address the important subset of
+// "sparse" least squares problems, where each component of the residual depends
+// only on a small number of parameters, even though the total number of
+// residuals and parameters may be very large. This property affords tremendous
+// gains in scale, allowing efficient solving of large problems that are
+// otherwise inaccessible.
 //
 // The canonical example of a sparse least squares problem is
-// "structure-from-motion" (SFM), where the parameters are points and
-// cameras, and residuals are reprojection errors. Typically a single
-// residual will depend only on 9 parameters (3 for the point, 6 for
-// the camera).
+// "structure-from-motion" (SFM), where the parameters are points and cameras,
+// and residuals are reprojection errors. Typically a single residual will
+// depend only on 9 parameters (3 for the point, 6 for the camera).
 //
 // To create a least squares problem, use the AddResidualBlock() and
 // AddParameterBlock() methods, documented below. Here is an example least
@@ -119,41 +117,49 @@
 //   problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, x2, x3);
 //
 // Please see cost_function.h for details of the CostFunction object.
+//
+// NOTE: We are currently in the process of transitioning from
+// LocalParameterization to Manifolds in the Ceres API. During this period,
+// Problem will support using both Manifold and LocalParameterization objects
+// interchangeably. In particular, adding a LocalParameterization to a parameter
+// block is the same as adding a Manifold to that parameter block. For methods
+// in the API affected by this change, see their documentation below.
 class CERES_EXPORT Problem {
  public:
   struct CERES_EXPORT Options {
-    // These flags control whether the Problem object owns the cost
-    // functions, loss functions, and parameterizations passed into
-    // the Problem. If set to TAKE_OWNERSHIP, then the problem object
-    // will delete the corresponding cost or loss functions on
-    // destruction. The destructor is careful to delete the pointers
-    // only once, since sharing cost/loss/parameterizations is
-    // allowed.
+    // These flags control whether the Problem object owns the CostFunctions,
+    // LossFunctions, LocalParameterizations, and Manifolds passed into the
+    // Problem.
+    //
+    // If set to TAKE_OWNERSHIP, then the problem object will delete the
+    // corresponding object on destruction. The destructor is careful to delete
+    // the pointers only once, since sharing objects is allowed.
     Ownership cost_function_ownership = TAKE_OWNERSHIP;
     Ownership loss_function_ownership = TAKE_OWNERSHIP;
     Ownership local_parameterization_ownership = TAKE_OWNERSHIP;
+    Ownership manifold_ownership = TAKE_OWNERSHIP;
 
     // If true, trades memory for faster RemoveResidualBlock() and
     // RemoveParameterBlock() operations.
     //
     // By default, RemoveParameterBlock() and RemoveResidualBlock() take time
-    // proportional to the size of the entire problem.  If you only ever remove
+    // proportional to the size of the entire problem. If you only ever remove
     // parameters or residuals from the problem occasionally, this might be
-    // acceptable.  However, if you have memory to spare, enable this option to
+    // acceptable. However, if you have memory to spare, enable this option to
     // make RemoveParameterBlock() take time proportional to the number of
     // residual blocks that depend on it, and RemoveResidualBlock() take (on
     // average) constant time.
     //
-    // The increase in memory usage is twofold: an additional hash set per
+    // The increase in memory usage is two-fold: an additional hash set per
     // parameter block containing all the residuals that depend on the parameter
     // block; and a hash set in the problem containing all residuals.
     bool enable_fast_removal = false;
 
     // By default, Ceres performs a variety of safety checks when constructing
-    // the problem. There is a small but measurable performance penalty to
-    // these checks, typically around 5% of construction time. If you are sure
-    // your problem construction is correct, and 5% of the problem construction
-    // time is truly an overhead you want to avoid, then you can set
+    // the problem. There is a small but measurable performance penalty to these
+    // checks, typically around 5% of construction time. If you are sure your
+    // problem construction is correct, and 5% of the problem construction time
+    // is truly an overhead you want to avoid, then you can set
     // disable_all_safety_checks to true.
     //
     // WARNING: Do not set this to true, unless you are absolutely sure of what
@@ -167,26 +173,23 @@
     // Ceres does NOT take ownership of the pointer.
     Context* context = nullptr;
 
-    // Using this callback interface, Ceres can notify you when it is
-    // about to evaluate the residuals or jacobians. With the
-    // callback, you can share computation between residual blocks by
-    // doing the shared computation in
+    // Using this callback interface, Ceres can notify you when it is about to
+    // evaluate the residuals or jacobians. With the callback, you can share
+    // computation between residual blocks by doing the shared computation in
     // EvaluationCallback::PrepareForEvaluation() before Ceres calls
-    // CostFunction::Evaluate(). It also enables caching results
-    // between a pure residual evaluation and a residual & jacobian
-    // evaluation.
+    // CostFunction::Evaluate(). It also enables caching results between a pure
+    // residual evaluation and a residual & jacobian evaluation.
     //
     // Problem DOES NOT take ownership of the callback.
     //
-    // NOTE: Evaluation callbacks are incompatible with inner
-    // iterations. So calling Solve with
-    // Solver::Options::use_inner_iterations = true on a Problem with
-    // a non-null evaluation callback is an error.
+    // NOTE: Evaluation callbacks are incompatible with inner iterations. So
+    // calling Solve with Solver::Options::use_inner_iterations = true on a
+    // Problem with a non-null evaluation callback is an error.
     EvaluationCallback* evaluation_callback = nullptr;
   };
 
-  // The default constructor is equivalent to the
-  // invocation Problem(Problem::Options()).
+  // The default constructor is equivalent to the invocation
+  // Problem(Problem::Options()).
   Problem();
   explicit Problem(const Options& options);
   Problem(Problem&&);
@@ -197,31 +200,29 @@
 
   ~Problem();
 
-  // Add a residual block to the overall cost function. The cost
-  // function carries with its information about the sizes of the
-  // parameter blocks it expects. The function checks that these match
-  // the sizes of the parameter blocks listed in parameter_blocks. The
-  // program aborts if a mismatch is detected. loss_function can be
-  // nullptr, in which case the cost of the term is just the squared norm
-  // of the residuals.
+  // Add a residual block to the overall cost function. The cost function
+  // carries with it information about the sizes of the parameter blocks it
+  // expects. The function checks that these match the sizes of the parameter
+  // blocks listed in parameter_blocks. The program aborts if a mismatch is
+  // detected. loss_function can be nullptr, in which case the cost of the term
+  // is just the squared norm of the residuals.
   //
-  // The user has the option of explicitly adding the parameter blocks
-  // using AddParameterBlock. This causes additional correctness
-  // checking; however, AddResidualBlock implicitly adds the parameter
-  // blocks if they are not present, so calling AddParameterBlock
-  // explicitly is not required.
+  // The user has the option of explicitly adding the parameter blocks using
+  // AddParameterBlock. This causes additional correctness checking; however,
+  // AddResidualBlock implicitly adds the parameter blocks if they are not
+  // present, so calling AddParameterBlock explicitly is not required.
   //
-  // The Problem object by default takes ownership of the
-  // cost_function and loss_function pointers. These objects remain
-  // live for the life of the Problem object. If the user wishes to
-  // keep control over the destruction of these objects, then they can
+  // The Problem object by default takes ownership of the cost_function and
+  // loss_function pointers (See Problem::Options to override this behaviour).
+  // These objects remain live for the life of the Problem object. If the user
+  // wishes to keep control over the destruction of these objects, then they can
   // do this by setting the corresponding enums in the Options struct.
   //
-  // Note: Even though the Problem takes ownership of cost_function
-  // and loss_function, it does not preclude the user from re-using
-  // them in another residual block. The destructor takes care to call
-  // delete on each cost_function or loss_function pointer only once,
-  // regardless of how many residual blocks refer to them.
+  // Note: Even though the Problem takes ownership of cost_function and
+  // loss_function, it does not preclude the user from re-using them in another
+  // residual block. The destructor takes care to call delete on each
+  // cost_function or loss_function pointer only once, regardless of how many
+  // residual blocks refer to them.
   //
   // Example usage:
   //
@@ -234,8 +235,8 @@
   //   problem.AddResidualBlock(new MyUnaryCostFunction(...), nullptr, x1);
   //   problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, x2, x1);
   //
-  // Add a residual block by listing the parameter block pointers
-  // directly instead of wapping them in a container.
+  // Add a residual block by listing the parameter block pointers directly
+  // instead of wrapping them in a container.
   template <typename... Ts>
   ResidualBlockId AddResidualBlock(CostFunction* cost_function,
                                    LossFunction* loss_function,
@@ -261,29 +262,72 @@
                                    double* const* const parameter_blocks,
                                    int num_parameter_blocks);
 
-  // Add a parameter block with appropriate size to the problem.
-  // Repeated calls with the same arguments are ignored. Repeated
-  // calls with the same double pointer but a different size results
-  // in undefined behaviour.
+  // Add a parameter block with appropriate size to the problem. Repeated calls
+  // with the same arguments are ignored. Repeated calls with the same double
+  // pointer but a different size will result in a crash.
   void AddParameterBlock(double* values, int size);
 
-  // Add a parameter block with appropriate size and parameterization
-  // to the problem. Repeated calls with the same arguments are
-  // ignored. Repeated calls with the same double pointer but a
-  // different size results in undefined behaviour.
+  // Add a parameter block with appropriate size and parameterization to the
+  // problem. It is okay for local_parameterization to be nullptr.
+  //
+  // Repeated calls with the same arguments are ignored. Repeated calls
+  // with the same double pointer but a different size results in a crash
+  // (unless Problem::Options::disable_all_safety_checks is set to true).
+  //
+  // Repeated calls with the same double pointer and size but a different
+  // LocalParameterization are equivalent to calling
+  // SetParameterization(local_parameterization), i.e., any previously
+  // associated LocalParameterization or Manifold object will be replaced with
+  // the local_parameterization.
+  //
+  // NOTE:
+  // ----
+  //
+  // This method is deprecated and will be removed in the next public
+  // release of Ceres Solver. Please move to using the Manifold based version of
+  // AddParameterBlock.
+  //
+  // During the transition from LocalParameterization to Manifold, internally
+  // the LocalParameterization is treated as a Manifold by wrapping it using a
+  // ManifoldAdapter object. So HasManifold() will return true, GetManifold()
+  // will return the wrapped object and ParameterBlockTangentSize() will return
+  // the LocalSize of the LocalParameterization.
   void AddParameterBlock(double* values,
                          int size,
                          LocalParameterization* local_parameterization);
 
-  // Remove a parameter block from the problem. The parameterization of the
-  // parameter block, if it exists, will persist until the deletion of the
-  // problem (similar to cost/loss functions in residual block removal). Any
-  // residual blocks that depend on the parameter are also removed, as
-  // described above in RemoveResidualBlock().
+  // Add a parameter block with appropriate size and Manifold to the
+  // problem. It is okay for manifold to be nullptr.
   //
-  // If Problem::Options::enable_fast_removal is true, then the
-  // removal is fast (almost constant time). Otherwise, removing a parameter
-  // block will incur a scan of the entire Problem object.
+  // Repeated calls with the same arguments are ignored. Repeated calls
+  // with the same double pointer but a different size results in a crash
+  // (unless Problem::Options::disable_all_safety_checks is set to true).
+  //
+  // Repeated calls with the same double pointer and size but a different
+  // Manifold are equivalent to calling SetManifold(manifold), i.e., any
+  // previously associated LocalParameterization or Manifold object will be
+  // replaced with the manifold.
+  //
+  // Note:
+  // ----
+  //
+  // During the transition from LocalParameterization to Manifold, calling
+  // AddParameterBlock with a Manifold when a LocalParameterization is already
+  // associated with the parameter block is okay. It is equivalent to calling
+  // SetManifold(manifold), i.e., any previously associated
+  // LocalParameterization or Manifold object will be replaced with the
+  // manifold.
+  void AddParameterBlock(double* values, int size, Manifold* manifold);
+
+  // Remove a parameter block from the problem. The LocalParameterization or
+  // Manifold of the parameter block, if it exists, will persist until the
+  // deletion of the problem (similar to cost/loss functions in residual block
+  // removal). Any residual blocks that depend on the parameter are also
+  // removed, as described above in RemoveResidualBlock().
+  //
+  // If Problem::Options::enable_fast_removal is true, then the removal is fast
+  // (almost constant time). Otherwise, removing a parameter block will incur a
+  // scan of the entire Problem object.
   //
   // WARNING: Removing a residual or parameter block will destroy the implicit
   // ordering, rendering the jacobian or residuals returned from the solver
@@ -308,39 +352,99 @@
   // Allow the indicated parameter block to vary during optimization.
   void SetParameterBlockVariable(double* values);
 
-  // Returns true if a parameter block is set constant, and false
-  // otherwise. A parameter block may be set constant in two ways:
-  // either by calling SetParameterBlockConstant or by associating a
-  // LocalParameterization with a zero dimensional tangent space with
-  // it.
+  // Returns true if a parameter block is set constant, and false otherwise. A
+  // parameter block may be set constant in two ways: either by calling
+  // SetParameterBlockConstant or by associating with it a
+  // LocalParameterization or Manifold with a zero dimensional tangent space.
   bool IsParameterBlockConstant(const double* values) const;
 
-  // Set the local parameterization for one of the parameter blocks.
-  // The local_parameterization is owned by the Problem by default. It
-  // is acceptable to set the same parameterization for multiple
-  // parameters; the destructor is careful to delete local
-  // parameterizations only once. Calling SetParameterization with
-  // nullptr will clear any previously set parameterization.
+  // Set the LocalParameterization for the parameter block. Calling
+  // SetParameterization with nullptr will clear any previously set
+  // LocalParameterization or Manifold for the parameter block.
+  //
+  // Repeated calls will cause any previously associated LocalParameterization
+  // or Manifold object to be replaced with the local_parameterization.
+  //
+  // The local_parameterization is owned by the Problem by default (See
+  // Problem::Options to override this behaviour).
+  //
+  // It is acceptable to set the same LocalParameterization for multiple
+  // parameter blocks; the destructor is careful to delete
+  // LocalParameterizations only once.
+  //
+  // NOTE:
+  // ----
+  //
+  // This method is deprecated and will be removed in the next public
+  // release of Ceres Solver. Please move to using SetManifold instead.
+  //
+  // During the transition from LocalParameterization to Manifold, internally
+  // the LocalParameterization is treated as a Manifold by wrapping it using a
+  // ManifoldAdapter object. So HasManifold() will return true, GetManifold()
+  // will return the wrapped object and ParameterBlockTangentSize() will return
+  // the LocalSize of the LocalParameterization.
   void SetParameterization(double* values,
                            LocalParameterization* local_parameterization);
 
-  // Get the local parameterization object associated with this
-  // parameter block. If there is no parameterization object
-  // associated then nullptr is returned.
+  // Get the LocalParameterization object associated with this parameter block.
+  // If there is no LocalParameterization associated then nullptr is returned.
+  //
+  // NOTE: This method is deprecated and will be removed in the next public
+  // release of Ceres Solver. Use GetManifold instead.
+  //
+  // Note also that if a LocalParameterization is associated with a parameter
+  // block, HasManifold will return true and GetManifold will return the
+  // LocalParameterization wrapped in a ManifoldAdapter.
+  //
+  // The converse is not true, i.e., if a Manifold is associated with a
+  // parameter block, HasParameterization will return false and
+  // GetParameterization will return a nullptr.
   const LocalParameterization* GetParameterization(const double* values) const;
 
-  // Returns true if a parameterization is associated with this parameter block,
-  // false otherwise.
+  // Returns true if a LocalParameterization or a Manifold is associated with
+  // this parameter block, false otherwise.
+  //
+  // NOTE: This method is deprecated and will be removed in the next public
+  // release of Ceres Solver. Use HasManifold instead.
   bool HasParameterization(const double* values) const;
 
+  // Set the Manifold for the parameter block. Calling SetManifold with nullptr
+  // will clear any previously set LocalParameterization or Manifold for the
+  // parameter block.
+  //
+  // Repeated calls will cause any previously associated LocalParameterization
+  // or Manifold object to be replaced with the manifold.
+  //
+  // The manifold is owned by the Problem by default (See Problem::Options to
+  // override this behaviour).
+  //
+  // It is acceptable to set the same Manifold for multiple parameter blocks;
+  // the destructor is careful to delete Manifolds only once.
+  void SetManifold(double* values, Manifold* manifold);
+
+  // Get the manifold object associated with this parameter block.
+  //
+  // If there is no Manifold or LocalParameterization object associated, then
+  // nullptr is returned.
+  //
+  // NOTE: During the transition from LocalParameterization to Manifold,
+  // internally the LocalParameterization is treated as a Manifold by wrapping
+  // it using a ManifoldAdapter object. So calling GetManifold on a parameter
+  // block with a LocalParameterization associated with it will return the
+  // LocalParameterization wrapped in a ManifoldAdapter.
+  const Manifold* GetManifold(const double* values) const;
+
+  // Returns true if a Manifold or a LocalParameterization is associated with
+  // this parameter block, false otherwise.
+  bool HasManifold(const double* values) const;
+
   // Set the lower/upper bound for the parameter at position "index".
   void SetParameterLowerBound(double* values, int index, double lower_bound);
   void SetParameterUpperBound(double* values, int index, double upper_bound);
 
-  // Get the lower/upper bound for the parameter at position
-  // "index". If the parameter is not bounded by the user, then its
-  // lower bound is -std::numeric_limits<double>::max() and upper
-  // bound is std::numeric_limits<double>::max().
+  // Get the lower/upper bound for the parameter at position "index". If the
+  // parameter is not bounded by the user, then its lower bound is
+  // -std::numeric_limits<double>::max() and upper bound is
+  // std::numeric_limits<double>::max().
   double GetParameterLowerBound(const double* values, int index) const;
   double GetParameterUpperBound(const double* values, int index) const;
 
@@ -348,37 +452,44 @@
   // parameter_blocks().size() and parameter_block_sizes().size().
   int NumParameterBlocks() const;
 
-  // The size of the parameter vector obtained by summing over the
-  // sizes of all the parameter blocks.
+  // The size of the parameter vector obtained by summing over the sizes of all
+  // the parameter blocks.
   int NumParameters() const;
 
   // Number of residual blocks in the problem. Always equals
   // residual_blocks().size().
   int NumResidualBlocks() const;
 
-  // The size of the residual vector obtained by summing over the
-  // sizes of all of the residual blocks.
+  // The size of the residual vector obtained by summing over the sizes of all
+  // of the residual blocks.
   int NumResiduals() const;
 
   // The size of the parameter block.
   int ParameterBlockSize(const double* values) const;
 
-  // The size of local parameterization for the parameter block. If
-  // there is no local parameterization associated with this parameter
-  // block, then ParameterBlockLocalSize = ParameterBlockSize.
+  // The dimension of the tangent space of the LocalParameterization or Manifold
+  // for the parameter block. If there is no LocalParameterization or Manifold
+  // associated with this parameter block, then ParameterBlockLocalSize =
+  // ParameterBlockSize.
   int ParameterBlockLocalSize(const double* values) const;
 
+  // The dimension of the tangent space of the LocalParameterization or Manifold
+  // for the parameter block. If there is no LocalParameterization or Manifold
+  // associated with this parameter block, then ParameterBlockTangentSize =
+  // ParameterBlockSize.
+  int ParameterBlockTangentSize(const double* values) const;
+
   // Is the given parameter block present in this problem or not?
   bool HasParameterBlock(const double* values) const;
 
-  // Fills the passed parameter_blocks vector with pointers to the
-  // parameter blocks currently in the problem. After this call,
-  // parameter_block.size() == NumParameterBlocks.
+  // Fills the passed parameter_blocks vector with pointers to the parameter
+  // blocks currently in the problem. After this call,
+  // parameter_blocks.size() == NumParameterBlocks.
   void GetParameterBlocks(std::vector<double*>* parameter_blocks) const;
 
-  // Fills the passed residual_blocks vector with pointers to the
-  // residual blocks currently in the problem. After this call,
-  // residual_blocks.size() == NumResidualBlocks.
+  // Fills the passed residual_blocks vector with pointers to the residual
+  // blocks currently in the problem. After this call, residual_blocks.size() ==
+  // NumResidualBlocks.
   void GetResidualBlocks(std::vector<ResidualBlockId>* residual_blocks) const;
 
   // Get all the parameter blocks that depend on the given residual block.
@@ -397,10 +508,10 @@
 
   // Get all the residual blocks that depend on the given parameter block.
   //
-  // If Problem::Options::enable_fast_removal is true, then
-  // getting the residual blocks is fast and depends only on the number of
-  // residual blocks. Otherwise, getting the residual blocks for a parameter
-  // block will incur a scan of the entire Problem object.
+  // If Problem::Options::enable_fast_removal is true, then getting the residual
+  // blocks is fast and depends only on the number of residual
+  // blocks. Otherwise, getting the residual blocks for a parameter block will
+  // incur a scan of the entire Problem object.
   void GetResidualBlocksForParameterBlock(
       const double* values,
       std::vector<ResidualBlockId>* residual_blocks) const;
@@ -408,49 +519,45 @@
   // Options struct to control Problem::Evaluate.
   struct EvaluateOptions {
     // The set of parameter blocks for which evaluation should be
-    // performed. This vector determines the order that parameter
-    // blocks occur in the gradient vector and in the columns of the
-    // jacobian matrix. If parameter_blocks is empty, then it is
-    // assumed to be equal to vector containing ALL the parameter
-    // blocks.  Generally speaking the parameter blocks will occur in
-    // the order in which they were added to the problem. But, this
-    // may change if the user removes any parameter blocks from the
-    // problem.
+    // performed. This vector determines the order that parameter blocks occur
+    // in the gradient vector and in the columns of the jacobian matrix. If
+    // parameter_blocks is empty, then it is assumed to be equal to the vector
+    // containing ALL the parameter blocks. Generally speaking the parameter
+    // blocks will occur in the order in which they were added to the
+    // problem. But, this may change if the user removes any parameter blocks
+    // from the problem.
     //
-    // NOTE: This vector should contain the same pointers as the ones
-    // used to add parameter blocks to the Problem. These parameter
-    // block should NOT point to new memory locations. Bad things will
-    // happen otherwise.
+    // NOTE: This vector should contain the same pointers as the ones used to
+    // add parameter blocks to the Problem. These parameter blocks should NOT
+    // point to new memory locations. Bad things will happen otherwise.
     std::vector<double*> parameter_blocks;
 
-    // The set of residual blocks to evaluate. This vector determines
-    // the order in which the residuals occur, and how the rows of the
-    // jacobian are ordered. If residual_blocks is empty, then it is
-    // assumed to be equal to the vector containing ALL the residual
-    // blocks. Generally speaking the residual blocks will occur in
-    // the order in which they were added to the problem. But, this
-    // may change if the user removes any residual blocks from the
-    // problem.
+    // The set of residual blocks to evaluate. This vector determines the order
+    // in which the residuals occur, and how the rows of the jacobian are
+    // ordered. If residual_blocks is empty, then it is assumed to be equal to
+    // the vector containing ALL the residual blocks. Generally speaking the
+    // residual blocks will occur in the order in which they were added to the
+    // problem. But, this may change if the user removes any residual blocks
+    // from the problem.
     std::vector<ResidualBlockId> residual_blocks;
 
     // Even though the residual blocks in the problem may contain loss
-    // functions, setting apply_loss_function to false will turn off
-    // the application of the loss function to the output of the cost
-    // function. This is of use for example if the user wishes to
-    // analyse the solution quality by studying the distribution of
-    // residuals before and after the solve.
+    // functions, setting apply_loss_function to false will turn off the
+    // application of the loss function to the output of the cost function. This
+    // is of use for example if the user wishes to analyse the solution quality
+    // by studying the distribution of residuals before and after the solve.
     bool apply_loss_function = true;
 
     int num_threads = 1;
   };
 
-  // Evaluate Problem. Any of the output pointers can be nullptr. Which
-  // residual blocks and parameter blocks are used is controlled by
-  // the EvaluateOptions struct above.
+  // Evaluate Problem. Any of the output pointers can be nullptr. Which residual
+  // blocks and parameter blocks are used is controlled by the EvaluateOptions
+  // struct above.
   //
-  // Note 1: The evaluation will use the values stored in the memory
-  // locations pointed to by the parameter block pointers used at the
-  // time of the construction of the problem. i.e.,
+  // Note 1: The evaluation will use the values stored in the memory locations
+  // pointed to by the parameter block pointers used at the time of the
+  // construction of the problem. i.e.,
   //
   //   Problem problem;
   //   double x = 1;
@@ -460,8 +567,8 @@
   //   problem.Evaluate(Problem::EvaluateOptions(), &cost,
   //                    nullptr, nullptr, nullptr);
   //
-  // The cost is evaluated at x = 1. If you wish to evaluate the
-  // problem at x = 2, then
+  // The cost is evaluated at x = 1. If you wish to evaluate the problem at x =
+  // 2, then
   //
   //   x = 2;
   //   problem.Evaluate(Problem::EvaluateOptions(), &cost,
@@ -469,80 +576,75 @@
   //
   // is the way to do so.
   //
-  // Note 2: If no local parameterizations are used, then the size of
-  // the gradient vector (and the number of columns in the jacobian)
-  // is the sum of the sizes of all the parameter blocks. If a
-  // parameter block has a local parameterization, then it contributes
-  // "LocalSize" entries to the gradient vector (and the number of
-  // columns in the jacobian).
+  // Note 2: If no LocalParameterizations or Manifolds are used, then the size
+  // of the gradient vector (and the number of columns in the jacobian) is the
+  // sum of the sizes of all the parameter blocks. If a parameter block has a
+  // LocalParameterization or Manifold, then it contributes "TangentSize"
+  // entries to the gradient vector (and the number of columns in the jacobian).
   //
-  // Note 3: This function cannot be called while the problem is being
-  // solved, for example it cannot be called from an IterationCallback
-  // at the end of an iteration during a solve.
+  // Note 3: This function cannot be called while the problem is being solved,
+  // for example it cannot be called from an IterationCallback at the end of an
+  // iteration during a solve.
   //
-  // Note 4: If an EvaluationCallback is associated with the problem,
-  // then its PrepareForEvaluation method will be called every time
-  // this method is called with new_point = true.
+  // Note 4: If an EvaluationCallback is associated with the problem, then its
+  // PrepareForEvaluation method will be called every time this method is called
+  // with new_point = true.
   bool Evaluate(const EvaluateOptions& options,
                 double* cost,
                 std::vector<double>* residuals,
                 std::vector<double>* gradient,
                 CRSMatrix* jacobian);
 
-  // Evaluates the residual block, storing the scalar cost in *cost,
-  // the residual components in *residuals, and the jacobians between
-  // the parameters and residuals in jacobians[i], in row-major order.
+  // Evaluates the residual block, storing the scalar cost in *cost, the
+  // residual components in *residuals, and the jacobians between the parameters
+  // and residuals in jacobians[i], in row-major order.
   //
   // If residuals is nullptr, the residuals are not computed.
   //
-  // If jacobians is nullptr, no Jacobians are computed. If
-  // jacobians[i] is nullptr, then the Jacobian for that parameter
-  // block is not computed.
+  // If jacobians is nullptr, no Jacobians are computed. If jacobians[i] is
+  // nullptr, then the Jacobian for that parameter block is not computed.
   //
-  // It is not okay to request the Jacobian w.r.t a parameter block
-  // that is constant.
+  // It is not okay to request the Jacobian w.r.t a parameter block that is
+  // constant.
   //
-  // The return value indicates the success or failure. Even if the
-  // function returns false, the caller should expect the output
-  // memory locations to have been modified.
+  // The return value indicates the success or failure. Even if the function
+  // returns false, the caller should expect the output memory locations to have
+  // been modified.
   //
   // The returned cost and jacobians have had robustification and
-  // local parameterizations applied already; for example, the
-  // jacobian for a 4-dimensional quaternion parameter using the
-  // "QuaternionParameterization" is num_residuals by 3 instead of
-  // num_residuals by 4.
+  // LocalParameterization/Manifold applied already; for example, the jacobian
+  // for a 4-dimensional quaternion parameter using the
+  // "QuaternionParameterization" is num_residuals by 3 instead of num_residuals
+  // by 4.
   //
-  // apply_loss_function as the name implies allows the user to switch
-  // the application of the loss function on and off.
+  // apply_loss_function as the name implies allows the user to switch the
+  // application of the loss function on and off.
   //
   // If an EvaluationCallback is associated with the problem, then its
-  // PrepareForEvaluation method will be called every time this method
-  // is called with new_point = true. This conservatively assumes that
-  // the user may have changed the parameter values since the previous
-  // call to evaluate / solve.  For improved efficiency, and only if
-  // you know that the parameter values have not changed between
-  // calls, see EvaluateResidualBlockAssumingParametersUnchanged().
+  // PrepareForEvaluation method will be called every time this method is called
+  // with new_point = true. This conservatively assumes that the user may have
+  // changed the parameter values since the previous call to evaluate / solve.
+  // For improved efficiency, and only if you know that the parameter values
+  // have not changed between calls, see
+  // EvaluateResidualBlockAssumingParametersUnchanged().
   bool EvaluateResidualBlock(ResidualBlockId residual_block_id,
                              bool apply_loss_function,
                              double* cost,
                              double* residuals,
                              double** jacobians) const;
 
-  // Same as EvaluateResidualBlock except that if an
-  // EvaluationCallback is associated with the problem, then its
-  // PrepareForEvaluation method will be called every time this method
-  // is called with new_point = false.
+  // Same as EvaluateResidualBlock except that if an EvaluationCallback is
+  // associated with the problem, then its PrepareForEvaluation method will be
+  // called every time this method is called with new_point = false.
   //
-  // This means, if an EvaluationCallback is associated with the
-  // problem then it is the user's responsibility to call
-  // PrepareForEvaluation before calling this method if necessary,
-  // i.e. iff the parameter values have been changed since the last
-  // call to evaluate / solve.'
+  // This means, if an EvaluationCallback is associated with the problem then it
+  // is the user's responsibility to call PrepareForEvaluation before calling
+  // this method if necessary, i.e. iff the parameter values have been changed
+  // since the last call to evaluate / solve.
   //
-  // This is because, as the name implies, we assume that the
-  // parameter blocks did not change since the last time
-  // PrepareForEvaluation was called (via Solve, Evaluate or
-  // EvaluateResidualBlock).
+  // This is because, as the name implies, we assume that the parameter blocks
+  // did not change since the last time PrepareForEvaluation was called (via
+  // Solve, Evaluate or EvaluateResidualBlock).
   bool EvaluateResidualBlockAssumingParametersUnchanged(
       ResidualBlockId residual_block_id,
       bool apply_loss_function,
diff --git a/include/ceres/version.h b/include/ceres/version.h
index a76cc10..557847e 100644
--- a/include/ceres/version.h
+++ b/include/ceres/version.h
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -32,7 +32,7 @@
 #define CERES_PUBLIC_VERSION_H_
 
 #define CERES_VERSION_MAJOR 2
-#define CERES_VERSION_MINOR 0
+#define CERES_VERSION_MINOR 1
 #define CERES_VERSION_REVISION 0
 
 // Classic CPP stringifcation; the extra level of indirection allows the
@@ -41,9 +41,9 @@
 #define CERES_TO_STRING(x) CERES_TO_STRING_HELPER(x)
 
 // The Ceres version as a string; for example "1.9.0".
-#define CERES_VERSION_STRING                                    \
-  CERES_TO_STRING(CERES_VERSION_MAJOR)                          \
-  "." CERES_TO_STRING(CERES_VERSION_MINOR) "." CERES_TO_STRING( \
-      CERES_VERSION_REVISION)
+#define CERES_VERSION_STRING                                       \
+  CERES_TO_STRING(CERES_VERSION_MAJOR) "."                         \
+      CERES_TO_STRING(CERES_VERSION_MINOR) "."                     \
+      CERES_TO_STRING(CERES_VERSION_REVISION)
 
 #endif  // CERES_PUBLIC_VERSION_H_
diff --git a/internal/ceres/block_jacobian_writer.cc b/internal/ceres/block_jacobian_writer.cc
index 17c157b..e589c34 100644
--- a/internal/ceres/block_jacobian_writer.cc
+++ b/internal/ceres/block_jacobian_writer.cc
@@ -78,7 +78,7 @@
         // Only count blocks for active parameters.
         num_jacobian_blocks++;
         if (parameter_block->index() < num_eliminate_blocks) {
-          f_block_pos += num_residuals * parameter_block->LocalSize();
+          f_block_pos += num_residuals * parameter_block->TangentSize();
         }
       }
     }
@@ -107,7 +107,7 @@
         continue;
       }
       const int jacobian_block_size =
-          num_residuals * parameter_block->LocalSize();
+          num_residuals * parameter_block->TangentSize();
       if (parameter_block_index < num_eliminate_blocks) {
         *jacobian_pos = e_block_pos;
         e_block_pos += jacobian_block_size;
@@ -159,7 +159,7 @@
   for (int i = 0, cursor = 0; i < parameter_blocks.size(); ++i) {
     CHECK_NE(parameter_blocks[i]->index(), -1);
     CHECK(!parameter_blocks[i]->IsConstant());
-    bs->cols[i].size = parameter_blocks[i]->LocalSize();
+    bs->cols[i].size = parameter_blocks[i]->TangentSize();
     bs->cols[i].position = cursor;
     cursor += bs->cols[i].size;
   }
diff --git a/internal/ceres/compressed_row_jacobian_writer.cc b/internal/ceres/compressed_row_jacobian_writer.cc
index 8e7e3e7..c6a3001 100644
--- a/internal/ceres/compressed_row_jacobian_writer.cc
+++ b/internal/ceres/compressed_row_jacobian_writer.cc
@@ -55,7 +55,7 @@
   vector<int>& col_blocks = *(jacobian->mutable_col_blocks());
   col_blocks.resize(parameter_blocks.size());
   for (int i = 0; i < parameter_blocks.size(); ++i) {
-    col_blocks[i] = parameter_blocks[i]->LocalSize();
+    col_blocks[i] = parameter_blocks[i]->TangentSize();
   }
 
   const vector<ResidualBlock*>& residual_blocks = program->residual_blocks();
@@ -99,7 +99,7 @@
     for (int j = 0; j < num_parameter_blocks; ++j) {
       ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
       if (!parameter_block->IsConstant()) {
-        num_jacobian_nonzeros += num_residuals * parameter_block->LocalSize();
+        num_jacobian_nonzeros += num_residuals * parameter_block->TangentSize();
       }
     }
   }
@@ -132,7 +132,7 @@
       ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
       if (!parameter_block->IsConstant()) {
         parameter_indices.push_back(parameter_block->index());
-        num_derivatives += parameter_block->LocalSize();
+        num_derivatives += parameter_block->TangentSize();
       }
     }
 
@@ -166,7 +166,7 @@
     for (int j = 0; j < parameter_indices.size(); ++j) {
       ParameterBlock* parameter_block =
           program_->parameter_blocks()[parameter_indices[j]];
-      const int parameter_block_size = parameter_block->LocalSize();
+      const int parameter_block_size = parameter_block->TangentSize();
 
       for (int r = 0; r < num_residuals; ++r) {
         // This is the position in the values array of the jacobian where this
@@ -214,7 +214,7 @@
     const ParameterBlock* parameter_block =
         program_->parameter_blocks()[evaluated_jacobian_blocks[i].first];
     const int argument = evaluated_jacobian_blocks[i].second;
-    const int parameter_block_size = parameter_block->LocalSize();
+    const int parameter_block_size = parameter_block->TangentSize();
 
     // Copy one row of the jacobian block at a time.
     for (int r = 0; r < num_residuals; ++r) {
diff --git a/internal/ceres/covariance_impl.cc b/internal/ceres/covariance_impl.cc
index 93cf41b..2180845 100644
--- a/internal/ceres/covariance_impl.cc
+++ b/internal/ceres/covariance_impl.cc
@@ -162,10 +162,10 @@
 
     const int block1_size = block1->Size();
     const int block2_size = block2->Size();
-    const int block1_local_size = block1->LocalSize();
-    const int block2_local_size = block2->LocalSize();
+    const int block1_tangent_size = block1->TangentSize();
+    const int block2_tangent_size = block2->TangentSize();
     if (!lift_covariance_to_ambient_space) {
-      MatrixRef(covariance_block, block1_local_size, block2_local_size)
+      MatrixRef(covariance_block, block1_tangent_size, block2_tangent_size)
           .setZero();
     } else {
       MatrixRef(covariance_block, block1_size, block2_size).setZero();
@@ -209,35 +209,34 @@
       FindOrDie(parameter_map, const_cast<double*>(parameter_block1));
   ParameterBlock* block2 =
       FindOrDie(parameter_map, const_cast<double*>(parameter_block2));
-  const LocalParameterization* local_param1 = block1->local_parameterization();
-  const LocalParameterization* local_param2 = block2->local_parameterization();
+  const Manifold* manifold1 = block1->manifold();
+  const Manifold* manifold2 = block2->manifold();
   const int block1_size = block1->Size();
-  const int block1_local_size = block1->LocalSize();
+  const int block1_tangent_size = block1->TangentSize();
   const int block2_size = block2->Size();
-  const int block2_local_size = block2->LocalSize();
+  const int block2_tangent_size = block2->TangentSize();
 
   ConstMatrixRef cov(covariance_matrix_->values() + rows[row_begin],
-                     block1_local_size,
+                     block1_tangent_size,
                      row_size);
 
-  // Fast path when there are no local parameterizations or if the
-  // user does not want it lifted to the ambient space.
-  if ((local_param1 == NULL && local_param2 == NULL) ||
+  // Fast path when there are no manifolds or if the user does not want it
+  // lifted to the ambient space.
+  if ((manifold1 == nullptr && manifold2 == nullptr) ||
       !lift_covariance_to_ambient_space) {
     if (transpose) {
-      MatrixRef(covariance_block, block2_local_size, block1_local_size) =
-          cov.block(0, offset, block1_local_size, block2_local_size)
+      MatrixRef(covariance_block, block2_tangent_size, block1_tangent_size) =
+          cov.block(0, offset, block1_tangent_size, block2_tangent_size)
               .transpose();
     } else {
-      MatrixRef(covariance_block, block1_local_size, block2_local_size) =
-          cov.block(0, offset, block1_local_size, block2_local_size);
+      MatrixRef(covariance_block, block1_tangent_size, block2_tangent_size) =
+          cov.block(0, offset, block1_tangent_size, block2_tangent_size);
     }
     return true;
   }
 
-  // If local parameterizations are used then the covariance that has
-  // been computed is in the tangent space and it needs to be lifted
-  // back to the ambient space.
+  // If manifolds are used then the covariance that has been computed is in the
+  // tangent space and it needs to be lifted back to the ambient space.
   //
   // This is given by the formula
   //
@@ -250,36 +249,37 @@
   // See Result 5.11 on page 142 of Hartley & Zisserman (2nd Edition)
   // for a proof.
   //
-  // TODO(sameeragarwal): Add caching of local parameterization, so
-  // that they are computed just once per parameter block.
-  Matrix block1_jacobian(block1_size, block1_local_size);
-  if (local_param1 == NULL) {
+  // TODO(sameeragarwal): Add caching of the manifold plus_jacobian, so that
+  // it is computed just once per parameter block.
+  Matrix block1_jacobian(block1_size, block1_tangent_size);
+  if (manifold1 == nullptr) {
     block1_jacobian.setIdentity();
   } else {
-    local_param1->ComputeJacobian(parameter_block1, block1_jacobian.data());
+    manifold1->PlusJacobian(parameter_block1, block1_jacobian.data());
   }
 
-  Matrix block2_jacobian(block2_size, block2_local_size);
+  Matrix block2_jacobian(block2_size, block2_tangent_size);
   // Fast path if the user is requesting a diagonal block.
   if (parameter_block1 == parameter_block2) {
     block2_jacobian = block1_jacobian;
   } else {
-    if (local_param2 == NULL) {
+    if (manifold2 == nullptr) {
       block2_jacobian.setIdentity();
     } else {
-      local_param2->ComputeJacobian(parameter_block2, block2_jacobian.data());
+      manifold2->PlusJacobian(parameter_block2, block2_jacobian.data());
     }
   }
 
   if (transpose) {
     MatrixRef(covariance_block, block2_size, block1_size) =
         block2_jacobian *
-        cov.block(0, offset, block1_local_size, block2_local_size).transpose() *
+        cov.block(0, offset, block1_tangent_size, block2_tangent_size)
+            .transpose() *
         block1_jacobian.transpose();
   } else {
     MatrixRef(covariance_block, block1_size, block2_size) =
         block1_jacobian *
-        cov.block(0, offset, block1_local_size, block2_local_size) *
+        cov.block(0, offset, block1_tangent_size, block2_tangent_size) *
         block2_jacobian.transpose();
   }
 
@@ -310,7 +310,7 @@
     if (lift_covariance_to_ambient_space) {
       parameter_sizes.push_back(block->Size());
     } else {
-      parameter_sizes.push_back(block->LocalSize());
+      parameter_sizes.push_back(block->TangentSize());
     }
   }
   std::partial_sum(parameter_sizes.begin(),
@@ -415,7 +415,7 @@
   for (int i = 0; i < active_parameter_blocks.size(); ++i) {
     double* parameter_block = active_parameter_blocks[i];
     const int parameter_block_size =
-        problem->ParameterBlockLocalSize(parameter_block);
+        problem->ParameterBlockTangentSize(parameter_block);
     parameter_block_to_row_index_[parameter_block] = num_rows;
     num_rows += parameter_block_size;
   }
@@ -435,8 +435,8 @@
 
     int index1 = FindOrDie(parameter_block_to_row_index_, block_pair.first);
     int index2 = FindOrDie(parameter_block_to_row_index_, block_pair.second);
-    const int size1 = problem->ParameterBlockLocalSize(block_pair.first);
-    const int size2 = problem->ParameterBlockLocalSize(block_pair.second);
+    const int size1 = problem->ParameterBlockTangentSize(block_pair.first);
+    const int size2 = problem->ParameterBlockTangentSize(block_pair.second);
     num_nonzeros += size1 * size2;
 
     // Make sure we are constructing a block upper triangular matrix.
@@ -481,7 +481,7 @@
   int cursor = 0;  // index into the covariance matrix.
   for (const auto& entry : parameter_block_to_row_index_) {
     const double* row_block = entry.first;
-    const int row_block_size = problem->ParameterBlockLocalSize(row_block);
+    const int row_block_size = problem->ParameterBlockTangentSize(row_block);
     int row_begin = entry.second;
 
     // Iterate over the covariance blocks contained in this row block
@@ -494,7 +494,7 @@
       if (block_pair.first != row_block) {
         break;
       }
-      num_columns += problem->ParameterBlockLocalSize(block_pair.second);
+      num_columns += problem->ParameterBlockTangentSize(block_pair.second);
     }
 
     // Fill out all the compressed rows for this parameter block.
@@ -502,7 +502,8 @@
       rows[row_begin + r] = cursor;
       for (int c = 0; c < num_col_blocks; ++c) {
         const double* col_block = covariance_blocks[i + c].second;
-        const int col_block_size = problem->ParameterBlockLocalSize(col_block);
+        const int col_block_size =
+            problem->ParameterBlockTangentSize(col_block);
         int col_begin = FindOrDie(parameter_block_to_row_index_, col_block);
         for (int k = 0; k < col_block_size; ++k) {
           cols[cursor++] = col_begin++;
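
Note: an illustrative standalone sketch (not part of the patch) of the lifting step performed in the covariance hunks above, C_ambient = J1 * C_tangent * J2', where J1 and J2 are the PlusJacobians of the two parameter blocks (identity for Euclidean blocks). For brevity the sketch assumes both blocks have a Manifold attached; the function name is hypothetical.

    #include "Eigen/Dense"
    #include "ceres/manifold.h"

    using RowMajorMatrix =
        Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;

    // Lifts a tangent-space covariance block back to the ambient space:
    //   C_ambient = J1 * C_tangent * J2^T
    // Euclidean blocks would use identity Jacobians instead.
    RowMajorMatrix LiftCovarianceToAmbientSpace(const ceres::Manifold& m1,
                                                const double* x1,
                                                const ceres::Manifold& m2,
                                                const double* x2,
                                                const RowMajorMatrix& c_tangent) {
      RowMajorMatrix j1(m1.AmbientSize(), m1.TangentSize());
      RowMajorMatrix j2(m2.AmbientSize(), m2.TangentSize());
      m1.PlusJacobian(x1, j1.data());
      m2.PlusJacobian(x2, j2.data());
      return j1 * c_tangent * j2.transpose();
    }
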
diff --git a/internal/ceres/covariance_test.cc b/internal/ceres/covariance_test.cc
index 229173f..1f3a75a 100644
--- a/internal/ceres/covariance_test.cc
+++ b/internal/ceres/covariance_test.cc
@@ -42,6 +42,7 @@
 #include "ceres/cost_function.h"
 #include "ceres/covariance_impl.h"
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "ceres/map_util.h"
 #include "ceres/problem_impl.h"
 #include "gtest/gtest.h"
@@ -129,29 +130,6 @@
   vector<double> jacobian2_;
 };
 
-// x_plus_delta = delta * x;
-class PolynomialParameterization : public LocalParameterization {
- public:
-  virtual ~PolynomialParameterization() {}
-
-  bool Plus(const double* x,
-            const double* delta,
-            double* x_plus_delta) const final {
-    x_plus_delta[0] = delta[0] * x[0];
-    x_plus_delta[1] = delta[0] * x[1];
-    return true;
-  }
-
-  bool ComputeJacobian(const double* x, double* jacobian) const final {
-    jacobian[0] = x[0];
-    jacobian[1] = x[1];
-    return true;
-  }
-
-  int GlobalSize() const final { return 2; }
-  int LocalSize() const final { return 1; }
-};
-
 TEST(CovarianceImpl, ComputeCovarianceSparsity) {
   double parameters[10];
 
@@ -398,6 +376,62 @@
   }
 }
 
+// x_plus_delta = delta * x;
+class PolynomialParameterization : public LocalParameterization {
+ public:
+  virtual ~PolynomialParameterization() {}
+
+  bool Plus(const double* x,
+            const double* delta,
+            double* x_plus_delta) const final {
+    x_plus_delta[0] = delta[0] * x[0];
+    x_plus_delta[1] = delta[0] * x[1];
+    return true;
+  }
+
+  bool ComputeJacobian(const double* x, double* jacobian) const final {
+    jacobian[0] = x[0];
+    jacobian[1] = x[1];
+    return true;
+  }
+
+  int GlobalSize() const final { return 2; }
+  int LocalSize() const final { return 1; }
+};
+
+// x_plus_delta = delta * x;
+class PolynomialManifold : public Manifold {
+ public:
+  virtual ~PolynomialManifold() {}
+
+  bool Plus(const double* x,
+            const double* delta,
+            double* x_plus_delta) const final {
+    x_plus_delta[0] = delta[0] * x[0];
+    x_plus_delta[1] = delta[0] * x[1];
+    return true;
+  }
+
+  bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+    LOG(FATAL) << "Should not be called";
+    return true;
+  }
+
+  bool PlusJacobian(const double* x, double* jacobian) const final {
+    jacobian[0] = x[0];
+    jacobian[1] = x[1];
+    return true;
+  }
+
+  bool MinusJacobian(const double* x, double* jacobian) const final {
+    LOG(FATAL) << "Should not be called";
+    return true;
+  }
+
+  int AmbientSize() const final { return 2; }
+  int TangentSize() const final { return 1; }
+};
+
 class CovarianceTest : public ::testing::Test {
  protected:
   typedef map<const double*, pair<int, int>> BoundsMap;
@@ -920,6 +954,193 @@
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 }
 
+TEST_F(CovarianceTest, Manifold) {
+  double* x = parameters_;
+  double* y = x + 2;
+
+  problem_.SetManifold(x, new PolynomialManifold);
+
+  vector<int> subset;
+  subset.push_back(2);
+  problem_.SetManifold(y, new SubsetManifold(3, subset));
+
+  // Raw Jacobian: J
+  //
+  //   1   0  0  0  0  0
+  //   0   1  0  0  0  0
+  //   0   0  2  0  0  0
+  //   0   0  0  2  0  0
+  //   0   0  0  0  2  0
+  //   0   0  0  0  0  5
+  //  -5  -6  1  2  3  0
+  //   3  -2  0  0  0  2
+
+  // Local to global jacobian: A
+  //
+  //  1   0   0   0
+  //  1   0   0   0
+  //  0   1   0   0
+  //  0   0   1   0
+  //  0   0   0   0
+  //  0   0   0   1
+
+  // A * inv((J*A)'*(J*A)) * A'
+  // Computed using octave.
+  // clang-format off
+  double expected_covariance[] = {
+    0.01766,   0.01766,   0.02158,   0.04316,   0.00000,  -0.00122,
+    0.01766,   0.01766,   0.02158,   0.04316,   0.00000,  -0.00122,
+    0.02158,   0.02158,   0.24860,  -0.00281,   0.00000,  -0.00149,
+    0.04316,   0.04316,  -0.00281,   0.24439,   0.00000,  -0.00298,
+    0.00000,   0.00000,   0.00000,   0.00000,   0.00000,   0.00000,
+   -0.00122,  -0.00122,  -0.00149,  -0.00298,   0.00000,   0.03457
+  };
+  // clang-format on
+
+  Covariance::Options options;
+
+#ifndef CERES_NO_SUITESPARSE
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+  ComputeAndCompareCovarianceBlocks(options, expected_covariance);
+#endif
+
+  options.algorithm_type = DENSE_SVD;
+  ComputeAndCompareCovarianceBlocks(options, expected_covariance);
+
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+  ComputeAndCompareCovarianceBlocks(options, expected_covariance);
+}
+
+TEST_F(CovarianceTest, ManifoldInTangentSpace) {
+  double* x = parameters_;
+  double* y = x + 2;
+  double* z = y + 3;
+
+  problem_.SetManifold(x, new PolynomialManifold);
+
+  vector<int> subset;
+  subset.push_back(2);
+  problem_.SetManifold(y, new SubsetManifold(3, subset));
+
+  local_column_bounds_[x] = make_pair(0, 1);
+  local_column_bounds_[y] = make_pair(1, 3);
+  local_column_bounds_[z] = make_pair(3, 4);
+
+  // Raw Jacobian: J
+  //
+  //   1   0  0  0  0  0
+  //   0   1  0  0  0  0
+  //   0   0  2  0  0  0
+  //   0   0  0  2  0  0
+  //   0   0  0  0  2  0
+  //   0   0  0  0  0  5
+  //  -5  -6  1  2  3  0
+  //   3  -2  0  0  0  2
+
+  // Local to global jacobian: A
+  //
+  //  1   0   0   0
+  //  1   0   0   0
+  //  0   1   0   0
+  //  0   0   1   0
+  //  0   0   0   0
+  //  0   0   0   1
+
+  // inv((J*A)'*(J*A))
+  // Computed using octave.
+  // clang-format off
+  double expected_covariance[] = {
+    0.01766,   0.02158,   0.04316,   -0.00122,
+    0.02158,   0.24860,  -0.00281,   -0.00149,
+    0.04316,  -0.00281,   0.24439,   -0.00298,
+   -0.00122,  -0.00149,  -0.00298,    0.03457  // NOLINT
+  };
+  // clang-format on
+
+  Covariance::Options options;
+
+#ifndef CERES_NO_SUITESPARSE
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+#endif
+
+  options.algorithm_type = DENSE_SVD;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+}
+
+TEST_F(CovarianceTest, ManifoldInTangentSpaceWithConstantBlocks) {
+  double* x = parameters_;
+  double* y = x + 2;
+  double* z = y + 3;
+
+  problem_.SetManifold(x, new PolynomialManifold);
+  problem_.SetParameterBlockConstant(x);
+
+  vector<int> subset;
+  subset.push_back(2);
+  problem_.SetManifold(y, new SubsetManifold(3, subset));
+  problem_.SetParameterBlockConstant(y);
+
+  local_column_bounds_[x] = make_pair(0, 1);
+  local_column_bounds_[y] = make_pair(1, 3);
+  local_column_bounds_[z] = make_pair(3, 4);
+
+  // Raw Jacobian: J
+  //
+  //   1   0  0  0  0  0
+  //   0   1  0  0  0  0
+  //   0   0  2  0  0  0
+  //   0   0  0  2  0  0
+  //   0   0  0  0  2  0
+  //   0   0  0  0  0  5
+  //  -5  -6  1  2  3  0
+  //   3  -2  0  0  0  2
+
+  // Local to global jacobian: A
+  //
+  //  0   0   0   0
+  //  0   0   0   0
+  //  0   0   0   0
+  //  0   0   0   0
+  //  0   0   0   0
+  //  0   0   0   1
+
+  // pinv((J*A)'*(J*A))
+  // Computed using octave.
+  // clang-format off
+  double expected_covariance[] = {
+    0.0, 0.0, 0.0, 0.0,
+    0.0, 0.0, 0.0, 0.0,
+    0.0, 0.0, 0.0, 0.0,
+    0.0, 0.0, 0.0, 0.034482 // NOLINT
+  };
+  // clang-format on
+
+  Covariance::Options options;
+
+#ifndef CERES_NO_SUITESPARSE
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+#endif
+
+  options.algorithm_type = DENSE_SVD;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+}
+
 TEST_F(CovarianceTest, TruncatedRank) {
   // J
   //
@@ -1031,7 +1252,9 @@
   ComputeAndCompareCovarianceBlocks(options, expected_covariance);
 }
 
-TEST_F(CovarianceTest, DenseCovarianceMatrixFromSetOfParametersInTangentSpace) {
+TEST_F(
+    CovarianceTest,
+    DenseCovarianceMatrixFromSetOfParametersInTangentSpaceUsingLocalParameterizations) {
   Covariance::Options options;
   Covariance covariance(options);
   double* x = parameters_;
@@ -1072,6 +1295,47 @@
   ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
 }
 
+TEST_F(CovarianceTest, DenseCovarianceMatrixFromSetOfParametersInTangentSpace) {
+  Covariance::Options options;
+  Covariance covariance(options);
+  double* x = parameters_;
+  double* y = x + 2;
+  double* z = y + 3;
+
+  problem_.SetManifold(x, new PolynomialManifold);
+
+  vector<int> subset;
+  subset.push_back(2);
+  problem_.SetManifold(y, new SubsetManifold(3, subset));
+
+  local_column_bounds_[x] = make_pair(0, 1);
+  local_column_bounds_[y] = make_pair(1, 3);
+  local_column_bounds_[z] = make_pair(3, 4);
+
+  vector<const double*> parameter_blocks;
+  parameter_blocks.push_back(x);
+  parameter_blocks.push_back(y);
+  parameter_blocks.push_back(z);
+  covariance.Compute(parameter_blocks, &problem_);
+  double expected_covariance[16];
+  covariance.GetCovarianceMatrixInTangentSpace(parameter_blocks,
+                                               expected_covariance);
+
+#ifndef CERES_NO_SUITESPARSE
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+#endif
+
+  options.algorithm_type = DENSE_SVD;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+
+  options.algorithm_type = SPARSE_QR;
+  options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+  ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
+}
+
 TEST_F(CovarianceTest, ComputeCovarianceFailure) {
   Covariance::Options options;
   Covariance covariance(options);
@@ -1265,6 +1529,74 @@
   EXPECT_EQ(value, -1);
 }
 
+TEST(Covariance, ZeroSizedManifoldGetCovariance) {
+  double x = 0.0;
+  double y = 1.0;
+  Problem problem;
+  problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
+  problem.SetManifold(&y, new SubsetManifold(1, {0}));
+  // J = [-1 0]
+  //     [ 0 0]
+  Covariance::Options options;
+  options.algorithm_type = DENSE_SVD;
+  Covariance covariance(options);
+  vector<pair<const double*, const double*>> covariance_blocks;
+  covariance_blocks.push_back(std::make_pair(&x, &x));
+  covariance_blocks.push_back(std::make_pair(&x, &y));
+  covariance_blocks.push_back(std::make_pair(&y, &x));
+  covariance_blocks.push_back(std::make_pair(&y, &y));
+  EXPECT_TRUE(covariance.Compute(covariance_blocks, &problem));
+
+  double value = -1;
+  covariance.GetCovarianceBlock(&x, &x, &value);
+  EXPECT_NEAR(value, 1.0, std::numeric_limits<double>::epsilon());
+
+  value = -1;
+  covariance.GetCovarianceBlock(&x, &y, &value);
+  EXPECT_NEAR(value, 0.0, std::numeric_limits<double>::epsilon());
+
+  value = -1;
+  covariance.GetCovarianceBlock(&y, &x, &value);
+  EXPECT_NEAR(value, 0.0, std::numeric_limits<double>::epsilon());
+
+  value = -1;
+  covariance.GetCovarianceBlock(&y, &y, &value);
+  EXPECT_NEAR(value, 0.0, std::numeric_limits<double>::epsilon());
+}
+
+TEST(Covariance, ZeroSizedManifoldGetCovarianceInTangentSpace) {
+  double x = 0.0;
+  double y = 1.0;
+  Problem problem;
+  problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
+  problem.SetManifold(&y, new SubsetManifold(1, {0}));
+  // J = [-1 0]
+  //     [ 0 0]
+  Covariance::Options options;
+  options.algorithm_type = DENSE_SVD;
+  Covariance covariance(options);
+  vector<pair<const double*, const double*>> covariance_blocks;
+  covariance_blocks.push_back(std::make_pair(&x, &x));
+  covariance_blocks.push_back(std::make_pair(&x, &y));
+  covariance_blocks.push_back(std::make_pair(&y, &x));
+  covariance_blocks.push_back(std::make_pair(&y, &y));
+  EXPECT_TRUE(covariance.Compute(covariance_blocks, &problem));
+
+  double value = -1;
+  covariance.GetCovarianceBlockInTangentSpace(&x, &x, &value);
+  EXPECT_NEAR(value, 1.0, std::numeric_limits<double>::epsilon());
+
+  value = -1;
+  // The following three calls should not touch this value, since the
+  // tangent space is of size zero.
+  covariance.GetCovarianceBlockInTangentSpace(&x, &y, &value);
+  EXPECT_EQ(value, -1);
+  covariance.GetCovarianceBlockInTangentSpace(&y, &x, &value);
+  EXPECT_EQ(value, -1);
+  covariance.GetCovarianceBlockInTangentSpace(&y, &y, &value);
+  EXPECT_EQ(value, -1);
+}
+
 class LargeScaleCovarianceTest : public ::testing::Test {
  protected:
   void SetUp() final {
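
Note: the ZeroSizedManifold tests above rely on the fact that a SubsetManifold holding every coordinate constant has an empty tangent space, so tangent-space covariance blocks involving it are zero-sized and the output buffer is left untouched. A minimal standalone check of that property (illustrative, not part of the patch):

    #include <cassert>
    #include "ceres/manifold.h"

    int main() {
      // A SubsetManifold that holds its single coordinate constant: the
      // ambient space is 1-dimensional but the tangent space is empty.
      ceres::SubsetManifold manifold(/*size=*/1, /*constant_parameters=*/{0});
      assert(manifold.AmbientSize() == 1);
      assert(manifold.TangentSize() == 0);
      return 0;
    }
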
diff --git a/internal/ceres/dense_jacobian_writer.h b/internal/ceres/dense_jacobian_writer.h
index 28c60e2..1d7187c 100644
--- a/internal/ceres/dense_jacobian_writer.h
+++ b/internal/ceres/dense_jacobian_writer.h
@@ -82,7 +82,7 @@
         continue;
       }
 
-      const int parameter_block_size = parameter_block->LocalSize();
+      const int parameter_block_size = parameter_block->TangentSize();
       ConstMatrixRef parameter_jacobian(
           jacobians[j], num_residuals, parameter_block_size);
 
diff --git a/internal/ceres/dynamic_compressed_row_jacobian_writer.cc b/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
index 1749449..9f709a3 100644
--- a/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
+++ b/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
@@ -82,7 +82,7 @@
         program_->parameter_blocks()[evaluated_jacobian_blocks[i].first];
     const int parameter_block_jacobian_index =
         evaluated_jacobian_blocks[i].second;
-    const int parameter_block_size = parameter_block->LocalSize();
+    const int parameter_block_size = parameter_block->TangentSize();
     const double* parameter_jacobian =
         jacobians[parameter_block_jacobian_index];
 
diff --git a/internal/ceres/evaluator.h b/internal/ceres/evaluator.h
index 9cf4259..5c9cd56 100644
--- a/internal/ceres/evaluator.h
+++ b/internal/ceres/evaluator.h
@@ -54,7 +54,7 @@
 // The Evaluator interface offers a way to interact with a least squares cost
 // function that is useful for an optimizer that wants to minimize the least
 // squares objective. This insulates the optimizer from issues like Jacobian
-// storage, parameterization, etc.
+// storage, manifolds, etc.
 class CERES_EXPORT_INTERNAL Evaluator {
  public:
   virtual ~Evaluator();
@@ -131,13 +131,13 @@
   // Make a change delta (of size NumEffectiveParameters()) to state (of size
   // NumParameters()) and store the result in state_plus_delta.
   //
-  // In the case that there are no parameterizations used, this is equivalent to
+  // In the case that there are no manifolds used, this is equivalent to
   //
   //   state_plus_delta[i] = state[i] + delta[i] ;
   //
-  // however, the mapping is more complicated in the case of parameterizations
+  // however, the mapping is more complicated in the case of manifolds
   // like quaternions. This is the same as the "Plus()" operation in
-  // local_parameterization.h, but operating over the entire state vector for a
+  // manifold.h, but operating over the entire state vector for a
   // problem.
   virtual bool Plus(const double* state,
                     const double* delta,
@@ -147,7 +147,7 @@
   virtual int NumParameters() const = 0;
 
   // This is the effective number of parameters that the optimizer may adjust.
-  // This applies when there are parameterizations on some of the parameters.
+  // This applies when there are manifolds on some of the parameters.
   virtual int NumEffectiveParameters() const = 0;
 
   // The number of residuals in the optimization problem.
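
Note: an illustrative sketch (not part of the patch) of the per-block behaviour the Plus() documentation above describes; the helper name BlockPlus is hypothetical. Blocks without a manifold fall back to plain vector addition.

    #include "ceres/manifold.h"

    // Applies the manifold Plus() to one parameter block, falling back to
    // Euclidean addition when no manifold is attached. Evaluator::Plus does
    // this block by block over the whole state vector.
    bool BlockPlus(const ceres::Manifold* manifold,
                   int ambient_size,
                   const double* state,
                   const double* delta,
                   double* state_plus_delta) {
      if (manifold != nullptr) {
        return manifold->Plus(state, delta, state_plus_delta);
      }
      for (int i = 0; i < ambient_size; ++i) {
        state_plus_delta[i] = state[i] + delta[i];
      }
      return true;
    }
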
diff --git a/internal/ceres/evaluator_test.cc b/internal/ceres/evaluator_test.cc
index 5ddb733..7aaf1b0 100644
--- a/internal/ceres/evaluator_test.cc
+++ b/internal/ceres/evaluator_test.cc
@@ -41,6 +41,7 @@
 #include "ceres/evaluator_test_utils.h"
 #include "ceres/internal/eigen.h"
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "ceres/problem_impl.h"
 #include "ceres/program.h"
 #include "ceres/sized_cost_function.h"
@@ -75,9 +76,9 @@
       for (int k = 0; k < Base::parameter_block_sizes().size(); ++k) {
         // The jacobians here are full sized, but they are transformed in the
         // evaluator into the "local" jacobian. In the tests, the "subset
-        // constant" parameterization is used, which should pick out columns
-        // from these jacobians. Put values in the jacobian that make this
-        // obvious; in particular, make the jacobians like this:
+        // constant" manifold is used, which should pick out columns from these
+        // jacobians. Put values in the jacobian that make this obvious; in
+        // particular, make the jacobians like this:
         //
         //   1 2 3 4 ...
         //   1 2 3 4 ...   .*  kFactor
@@ -456,6 +457,68 @@
   CheckAllEvaluationCombinations(expected);
 }
 
+TEST_P(EvaluatorTest, MultipleResidualsWithManifolds) {
+  // Add the parameters in explicit order to force the ordering in the program.
+  problem.AddParameterBlock(x, 2);
+
+  // Fix y's first dimension.
+  vector<int> y_fixed;
+  y_fixed.push_back(0);
+  problem.AddParameterBlock(y, 3, new SubsetManifold(3, y_fixed));
+
+  // Fix z's second dimension.
+  vector<int> z_fixed;
+  z_fixed.push_back(1);
+  problem.AddParameterBlock(z, 4, new SubsetManifold(4, z_fixed));
+
+  // f(x, y) in R^2
+  problem.AddResidualBlock(
+      new ParameterIgnoringCostFunction<1, 2, 2, 3>, nullptr, x, y);
+
+  // g(x, z) in R^3
+  problem.AddResidualBlock(
+      new ParameterIgnoringCostFunction<2, 3, 2, 4>, nullptr, x, z);
+
+  // h(y, z) in R^4
+  problem.AddResidualBlock(
+      new ParameterIgnoringCostFunction<3, 4, 3, 4>, nullptr, y, z);
+
+  // clang-format off
+  ExpectedEvaluation expected = {
+    // Rows/columns
+    9, 7,
+    // Cost
+    // f       g           h
+    (  1 + 4 + 1 + 4 + 9 + 1 + 4 + 9 + 16) / 2.0,
+    // Residuals
+    { 1.0, 2.0,           // f
+      1.0, 2.0, 3.0,      // g
+      1.0, 2.0, 3.0, 4.0  // h
+    },
+    // Gradient
+    { 15.0, 30.0,         // x
+      66.0, 99.0,         // y
+      42.0, 126.0, 168.0  // z
+    },
+    // Jacobian
+    //                x        y           z
+    {   /* f(x, y) */ 1, 2,    2, 3,    0, 0, 0,
+                      1, 2,    2, 3,    0, 0, 0,
+
+        /* g(x, z) */ 2, 4,    0, 0,    2, 6, 8,
+                      2, 4,    0, 0,    2, 6, 8,
+                      2, 4,    0, 0,    2, 6, 8,
+
+        /* h(y, z) */ 0, 0,    6, 9,    3, 9, 12,
+                      0, 0,    6, 9,    3, 9, 12,
+                      0, 0,    6, 9,    3, 9, 12,
+                      0, 0,    6, 9,    3, 9, 12
+    }
+  };
+  // clang-format on
+  CheckAllEvaluationCombinations(expected);
+}
+
 TEST_P(EvaluatorTest, MultipleResidualProblemWithSomeConstantParameters) {
   // The values are ignored completely by the cost function.
   double x[2];
diff --git a/internal/ceres/gradient_checker.cc b/internal/ceres/gradient_checker.cc
index dadaaa0..6ae7d55 100644
--- a/internal/ceres/gradient_checker.cc
+++ b/internal/ceres/gradient_checker.cc
@@ -40,6 +40,7 @@
 #include <vector>
 
 #include "ceres/is_close.h"
+#include "ceres/manifold_adapter.h"
 #include "ceres/stringprintf.h"
 #include "ceres/types.h"
 
@@ -53,15 +54,13 @@
 
 namespace {
 // Evaluate the cost function and transform the returned Jacobians to
-// the local space of the respective local parameterizations.
-bool EvaluateCostFunction(
-    const ceres::CostFunction* function,
-    double const* const* parameters,
-    const std::vector<const ceres::LocalParameterization*>&
-        local_parameterizations,
-    Vector* residuals,
-    std::vector<Matrix>* jacobians,
-    std::vector<Matrix>* local_jacobians) {
+// the tangent space of the respective manifolds.
+bool EvaluateCostFunction(const CostFunction* function,
+                          double const* const* parameters,
+                          const std::vector<const Manifold*>& manifolds,
+                          Vector* residuals,
+                          std::vector<Matrix>* jacobians,
+                          std::vector<Matrix>* local_jacobians) {
   CHECK(residuals != nullptr);
   CHECK(jacobians != nullptr);
   CHECK(local_jacobians != nullptr);
@@ -69,20 +68,20 @@
   const vector<int32_t>& block_sizes = function->parameter_block_sizes();
   const int num_parameter_blocks = block_sizes.size();
 
-  // Allocate Jacobian matrices in local space.
+  // Allocate Jacobian matrices in tangent space.
   local_jacobians->resize(num_parameter_blocks);
   vector<double*> local_jacobian_data(num_parameter_blocks);
   for (int i = 0; i < num_parameter_blocks; ++i) {
     int block_size = block_sizes.at(i);
-    if (local_parameterizations.at(i) != NULL) {
-      block_size = local_parameterizations.at(i)->LocalSize();
+    if (manifolds.at(i) != nullptr) {
+      block_size = manifolds.at(i)->TangentSize();
     }
     local_jacobians->at(i).resize(function->num_residuals(), block_size);
     local_jacobians->at(i).setZero();
     local_jacobian_data.at(i) = local_jacobians->at(i).data();
   }
 
-  // Allocate Jacobian matrices in global space.
+  // Allocate Jacobian matrices in ambient space.
   jacobians->resize(num_parameter_blocks);
   vector<double*> jacobian_data(num_parameter_blocks);
   for (int i = 0; i < num_parameter_blocks; ++i) {
@@ -100,18 +99,17 @@
     return false;
   }
 
-  // Convert Jacobians from global to local space.
+  // Convert Jacobians from ambient to tangent space.
   for (size_t i = 0; i < local_jacobians->size(); ++i) {
-    if (local_parameterizations.at(i) == NULL) {
+    if (manifolds.at(i) == nullptr) {
       local_jacobians->at(i) = jacobians->at(i);
     } else {
-      int global_size = local_parameterizations.at(i)->GlobalSize();
-      int local_size = local_parameterizations.at(i)->LocalSize();
-      CHECK_EQ(jacobians->at(i).cols(), global_size);
-      Matrix global_J_local(global_size, local_size);
-      local_parameterizations.at(i)->ComputeJacobian(parameters[i],
-                                                     global_J_local.data());
-      local_jacobians->at(i).noalias() = jacobians->at(i) * global_J_local;
+      int ambient_size = manifolds.at(i)->AmbientSize();
+      int tangent_size = manifolds.at(i)->TangentSize();
+      CHECK_EQ(jacobians->at(i).cols(), ambient_size);
+      Matrix ambient_J_tangent(ambient_size, tangent_size);
+      manifolds.at(i)->PlusJacobian(parameters[i], ambient_J_tangent.data());
+      local_jacobians->at(i).noalias() = jacobians->at(i) * ambient_J_tangent;
     }
   }
   return true;
@@ -122,13 +120,44 @@
     const CostFunction* function,
     const vector<const LocalParameterization*>* local_parameterizations,
     const NumericDiffOptions& options)
-    : function_(function) {
+    : delete_manifolds_(true), function_(function) {
   CHECK(function != nullptr);
-  if (local_parameterizations != NULL) {
-    local_parameterizations_ = *local_parameterizations;
+  manifolds_.resize(function->parameter_block_sizes().size(), nullptr);
+
+  // Wrap the local parameterizations into Manifold objects using
+  // ManifoldAdapter. The vector itself, as well as individual entries, may be
+  // nullptr to indicate that the corresponding parameter blocks are Euclidean.
+  if (local_parameterizations != nullptr) {
+    for (int i = 0; i < manifolds_.size(); ++i) {
+      const LocalParameterization* local_param =
+          local_parameterizations->at(i);
+      if (local_param == nullptr) {
+        continue;
+      }
+      manifolds_[i] = new internal::ManifoldAdapter(local_param);
+    }
+  }
+
+  DynamicNumericDiffCostFunction<CostFunction, RIDDERS>*
+      finite_diff_cost_function =
+          new DynamicNumericDiffCostFunction<CostFunction, RIDDERS>(
+              function, DO_NOT_TAKE_OWNERSHIP, options);
+  finite_diff_cost_function_.reset(finite_diff_cost_function);
+
+  const vector<int32_t>& parameter_block_sizes =
+      function->parameter_block_sizes();
+  for (int32_t parameter_block_size : parameter_block_sizes) {
+    finite_diff_cost_function->AddParameterBlock(parameter_block_size);
+  }
+  finite_diff_cost_function->SetNumResiduals(function->num_residuals());
+}
+
+GradientChecker::GradientChecker(const CostFunction* function,
+                                 const vector<const Manifold*>* manifolds,
+                                 const NumericDiffOptions& options)
+    : delete_manifolds_(false), function_(function) {
+  CHECK(function != nullptr);
+  if (manifolds != nullptr) {
+    manifolds_ = *manifolds;
   } else {
-    local_parameterizations_.resize(function->parameter_block_sizes().size(),
-                                    NULL);
+    manifolds_.resize(function->parameter_block_sizes().size(), nullptr);
   }
   DynamicNumericDiffCostFunction<CostFunction, RIDDERS>*
       finite_diff_cost_function =
@@ -145,6 +174,14 @@
   finite_diff_cost_function->SetNumResiduals(function->num_residuals());
 }
 
+GradientChecker::~GradientChecker() {
+  if (delete_manifolds_) {
+    for (const auto m : manifolds_) {
+      delete m;
+    }
+  }
+}
+
 bool GradientChecker::Probe(double const* const* parameters,
                             double relative_precision,
                             ProbeResults* results_param) const {
@@ -154,7 +191,7 @@
   // provided an output argument.
   ProbeResults* results;
   ProbeResults results_local;
-  if (results_param != NULL) {
+  if (results_param != nullptr) {
     results = results_param;
     results->residuals.resize(0);
     results->jacobians.clear();
@@ -173,7 +210,7 @@
   vector<Matrix>& local_jacobians = results->local_jacobians;
   if (!EvaluateCostFunction(function_,
                             parameters,
-                            local_parameterizations_,
+                            manifolds_,
                             &results->residuals,
                             &jacobians,
                             &local_jacobians)) {
@@ -187,7 +224,7 @@
   Vector finite_diff_residuals;
   if (!EvaluateCostFunction(finite_diff_cost_function_.get(),
                             parameters,
-                            local_parameterizations_,
+                            manifolds_,
                             &finite_diff_residuals,
                             &numeric_jacobians,
                             &local_numeric_jacobians)) {
@@ -205,8 +242,8 @@
     if (!IsClose(results->residuals[i],
                  finite_diff_residuals[i],
                  relative_precision,
-                 NULL,
-                 NULL)) {
+                 nullptr,
+                 nullptr)) {
       results->error_log =
           "Function evaluation with and without Jacobians "
           "resulted in different residuals.";
diff --git a/internal/ceres/gradient_checker_test.cc b/internal/ceres/gradient_checker_test.cc
index 31dc97b..c7d08cb 100644
--- a/internal/ceres/gradient_checker_test.cc
+++ b/internal/ceres/gradient_checker_test.cc
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -217,8 +217,11 @@
 
   // Test that Probe returns true for correct Jacobians.
   GoodTestTerm good_term(num_parameters, parameter_sizes.data());
-  GradientChecker good_gradient_checker(&good_term, NULL, numeric_diff_options);
-  EXPECT_TRUE(good_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+  std::vector<const Manifold*>* manifolds = nullptr;
+  GradientChecker good_gradient_checker(
+      &good_term, manifolds, numeric_diff_options);
+  EXPECT_TRUE(
+      good_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
   EXPECT_TRUE(
       good_gradient_checker.Probe(parameters.data(), kTolerance, &results))
       << results.error_log;
@@ -233,7 +236,7 @@
   // Test that if the cost function return false, Probe should return false.
   good_term.SetReturnValue(false);
   EXPECT_FALSE(
-      good_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+      good_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
   EXPECT_FALSE(
       good_gradient_checker.Probe(parameters.data(), kTolerance, &results))
       << results.error_log;
@@ -251,8 +254,10 @@
 
   // Test that Probe returns false for incorrect Jacobians.
   BadTestTerm bad_term(num_parameters, parameter_sizes.data());
-  GradientChecker bad_gradient_checker(&bad_term, NULL, numeric_diff_options);
-  EXPECT_FALSE(bad_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+  GradientChecker bad_gradient_checker(
+      &bad_term, manifolds, numeric_diff_options);
+  EXPECT_FALSE(
+      bad_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
   EXPECT_FALSE(
       bad_gradient_checker.Probe(parameters.data(), kTolerance, &results));
 
@@ -305,7 +310,7 @@
       residuals += residual_J_param * param;
 
       // Return Jacobian.
-      if (residual_J_params != NULL && residual_J_params[i] != NULL) {
+      if (residual_J_params != nullptr && residual_J_params[i] != nullptr) {
         Eigen::Map<Matrix> residual_J_param_out(residual_J_params[i],
                                                 residual_J_param.rows(),
                                                 residual_J_param.cols());
@@ -340,10 +345,15 @@
   Vector residuals_offset_;
 };
 
-/**
- * Helper local parameterization that multiplies the delta vector by the given
- * jacobian and adds it to the parameter.
- */
+// Helper function to compare two Eigen matrices (used in the test below).
+static void ExpectMatricesClose(Matrix p, Matrix q, double tolerance) {
+  ASSERT_EQ(p.rows(), q.rows());
+  ASSERT_EQ(p.cols(), q.cols());
+  ExpectArraysClose(p.size(), p.data(), q.data(), tolerance);
+}
+
+// Helper local parameterization that multiplies the delta vector by the given
+// jacobian and adds it to the parameter.
 class MatrixParameterization : public LocalParameterization {
  public:
   bool Plus(const double* x,
@@ -366,13 +376,6 @@
   Matrix global_J_local;
 };
 
-// Helper function to compare two Eigen matrices (used in the test below).
-static void ExpectMatricesClose(Matrix p, Matrix q, double tolerance) {
-  ASSERT_EQ(p.rows(), q.rows());
-  ASSERT_EQ(p.cols(), q.cols());
-  ExpectArraysClose(p.size(), p.data(), q.data(), tolerance);
-}
-
 TEST(GradientChecker, TestCorrectnessWithLocalParameterizations) {
   // Create cost function.
   Eigen::Vector3d residual_offset(100.0, 200.0, 300.0);
@@ -417,7 +420,7 @@
   ExpectMatricesClose(j2_out, j1, std::numeric_limits<double>::epsilon());
   ExpectMatricesClose(residual, residual_expected, kTolerance);
 
-  // Create local parameterization.
+  // Create parameterization.
   Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_J_local;
   global_J_local.row(0) << 1.5, 2.5;
   global_J_local.row(1) << 3.5, 4.5;
@@ -444,7 +447,7 @@
   // Now test GradientChecker.
   std::vector<const LocalParameterization*> parameterizations(2);
   parameterizations[0] = &parameterization;
-  parameterizations[1] = NULL;
+  parameterizations[1] = nullptr;
   NumericDiffOptions numeric_diff_options;
   GradientChecker::ProbeResults results;
   GradientChecker gradient_checker(
@@ -459,15 +462,10 @@
   problem.AddParameterBlock(param0_solver.data(), 3, &parameterization);
   problem.AddParameterBlock(param1_solver.data(), 2);
   problem.AddResidualBlock(
-      &cost_function, NULL, param0_solver.data(), param1_solver.data());
-  Solver::Options solver_options;
-  solver_options.check_gradients = true;
-  solver_options.initial_trust_region_radius = 1e10;
-  Solver solver;
-  Solver::Summary summary;
+      &cost_function, nullptr, param0_solver.data(), param1_solver.data());
 
   // First test case: everything is correct.
-  EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+  EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
   EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
       << results.error_log;
 
@@ -494,6 +492,13 @@
   EXPECT_TRUE(results.error_log.empty());
 
   // Test interaction with the 'check_gradients' option in Solver.
+  Solver::Options solver_options;
+  solver_options.linear_solver_type = DENSE_QR;
+  solver_options.check_gradients = true;
+  solver_options.initial_trust_region_radius = 1e10;
+  Solver solver;
+  Solver::Summary summary;
+
   param0_solver = param0;
   param1_solver = param1;
   solver.Solve(solver_options, &problem, &summary);
@@ -506,7 +511,7 @@
   j0_offset.setZero();
   j0_offset.col(2).setConstant(0.001);
   cost_function.SetJacobianOffset(0, j0_offset);
-  EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+  EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
   EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
       << results.error_log;
 
@@ -540,9 +545,9 @@
   solver.Solve(solver_options, &problem, &summary);
   EXPECT_EQ(FAILURE, summary.termination_type);
 
-  // Now, zero out the local parameterization Jacobian of the 1st parameter
-  // with respect to the 3rd component. This makes the combination of
-  // cost function and local parameterization return correct values again.
+  // Now, zero out the local parameterization Jacobian with respect to the 3rd
+  // component of the 1st parameter. This makes the combination of cost function
+  // and local parameterization return correct values again.
   parameterization.global_J_local.row(2).setZero();
 
   // Verify that the gradient checker does not treat this as an error.
@@ -582,5 +587,249 @@
   EXPECT_LE(summary.final_cost, 1e-12);
 }
 
+// Helper manifold that multiplies the delta vector by the given
+// jacobian and adds it to the parameter.
+class MatrixManifold : public Manifold {
+ public:
+  bool Plus(const double* x,
+            const double* delta,
+            double* x_plus_delta) const final {
+    VectorRef(x_plus_delta, AmbientSize()) =
+        ConstVectorRef(x, AmbientSize()) +
+        (global_J_local * ConstVectorRef(delta, TangentSize()));
+    return true;
+  }
+
+  bool PlusJacobian(const double* /*x*/, double* jacobian) const final {
+    MatrixRef(jacobian, AmbientSize(), TangentSize()) = global_J_local;
+    return true;
+  }
+
+  bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+    LOG(FATAL) << "Should not be called";
+    return true;
+  }
+
+  bool MinusJacobian(const double* x, double* jacobian) const final {
+    LOG(FATAL) << "Should not be called";
+    return true;
+  }
+
+  int AmbientSize() const final { return global_J_local.rows(); }
+  int TangentSize() const final { return global_J_local.cols(); }
+
+  Matrix global_J_local;
+};
+
+TEST(GradientChecker, TestCorrectnessWithManifolds) {
+  // Create cost function.
+  Eigen::Vector3d residual_offset(100.0, 200.0, 300.0);
+  LinearCostFunction cost_function(residual_offset);
+  Eigen::Matrix<double, 3, 3, Eigen::RowMajor> j0;
+  j0.row(0) << 1.0, 2.0, 3.0;
+  j0.row(1) << 4.0, 5.0, 6.0;
+  j0.row(2) << 7.0, 8.0, 9.0;
+  Eigen::Matrix<double, 3, 2, Eigen::RowMajor> j1;
+  j1.row(0) << 10.0, 11.0;
+  j1.row(1) << 12.0, 13.0;
+  j1.row(2) << 14.0, 15.0;
+
+  Eigen::Vector3d param0(1.0, 2.0, 3.0);
+  Eigen::Vector2d param1(4.0, 5.0);
+
+  cost_function.AddParameter(j0);
+  cost_function.AddParameter(j1);
+
+  std::vector<int> parameter_sizes(2);
+  parameter_sizes[0] = 3;
+  parameter_sizes[1] = 2;
+  std::vector<int> tangent_sizes(2);
+  tangent_sizes[0] = 2;
+  tangent_sizes[1] = 2;
+
+  // Test cost function for correctness.
+  Eigen::Matrix<double, 3, 3, Eigen::RowMajor> j1_out;
+  Eigen::Matrix<double, 3, 2, Eigen::RowMajor> j2_out;
+  Eigen::Vector3d residual;
+  std::vector<const double*> parameters(2);
+  parameters[0] = param0.data();
+  parameters[1] = param1.data();
+  std::vector<double*> jacobians(2);
+  jacobians[0] = j1_out.data();
+  jacobians[1] = j2_out.data();
+  cost_function.Evaluate(parameters.data(), residual.data(), jacobians.data());
+
+  Matrix residual_expected = residual_offset + j0 * param0 + j1 * param1;
+
+  ExpectMatricesClose(j1_out, j0, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(j2_out, j1, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(residual, residual_expected, kTolerance);
+
+  // Create manifold.
+  Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_J_local;
+  global_J_local.row(0) << 1.5, 2.5;
+  global_J_local.row(1) << 3.5, 4.5;
+  global_J_local.row(2) << 5.5, 6.5;
+
+  MatrixManifold manifold;
+  manifold.global_J_local = global_J_local;
+
+  // Test manifold for correctness.
+  Eigen::Vector3d x(7.0, 8.0, 9.0);
+  Eigen::Vector2d delta(10.0, 11.0);
+
+  Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_J_local_out;
+  manifold.PlusJacobian(x.data(), global_J_local_out.data());
+  ExpectMatricesClose(global_J_local_out,
+                      global_J_local,
+                      std::numeric_limits<double>::epsilon());
+
+  Eigen::Vector3d x_plus_delta;
+  manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+  Eigen::Vector3d x_plus_delta_expected = x + (global_J_local * delta);
+  ExpectMatricesClose(x_plus_delta, x_plus_delta_expected, kTolerance);
+
+  // Now test GradientChecker.
+  std::vector<const Manifold*> manifolds(2);
+  manifolds[0] = &manifold;
+  manifolds[1] = nullptr;
+  NumericDiffOptions numeric_diff_options;
+  GradientChecker::ProbeResults results;
+  GradientChecker gradient_checker(
+      &cost_function, &manifolds, numeric_diff_options);
+
+  Problem::Options problem_options;
+  problem_options.cost_function_ownership = DO_NOT_TAKE_OWNERSHIP;
+  problem_options.manifold_ownership = DO_NOT_TAKE_OWNERSHIP;
+  Problem problem(problem_options);
+  Eigen::Vector3d param0_solver;
+  Eigen::Vector2d param1_solver;
+  problem.AddParameterBlock(param0_solver.data(), 3, &manifold);
+  problem.AddParameterBlock(param1_solver.data(), 2);
+  problem.AddResidualBlock(
+      &cost_function, nullptr, param0_solver.data(), param1_solver.data());
+
+  // First test case: everything is correct.
+  EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
+  EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
+      << results.error_log;
+
+  // Check that results contain correct data.
+  ASSERT_EQ(results.return_value, true);
+  ExpectMatricesClose(
+      results.residuals, residual, std::numeric_limits<double>::epsilon());
+  CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
+  ExpectMatricesClose(
+      results.local_jacobians.at(0), j0 * global_J_local, kTolerance);
+  ExpectMatricesClose(results.local_jacobians.at(1),
+                      j1,
+                      std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(
+      results.local_numeric_jacobians.at(0), j0 * global_J_local, kTolerance);
+  ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
+  ExpectMatricesClose(
+      results.jacobians.at(0), j0, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(
+      results.jacobians.at(1), j1, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(results.numeric_jacobians.at(0), j0, kTolerance);
+  ExpectMatricesClose(results.numeric_jacobians.at(1), j1, kTolerance);
+  EXPECT_GE(results.maximum_relative_error, 0.0);
+  EXPECT_TRUE(results.error_log.empty());
+
+  // Test interaction with the 'check_gradients' option in Solver.
+  Solver::Options solver_options;
+  solver_options.linear_solver_type = DENSE_QR;
+  solver_options.check_gradients = true;
+  solver_options.initial_trust_region_radius = 1e10;
+  Solver solver;
+  Solver::Summary summary;
+
+  param0_solver = param0;
+  param1_solver = param1;
+  solver.Solve(solver_options, &problem, &summary);
+  EXPECT_EQ(CONVERGENCE, summary.termination_type);
+  EXPECT_LE(summary.final_cost, 1e-12);
+
+  // Second test case: Mess up reported derivatives with respect to 3rd
+  // component of 1st parameter. Check should fail.
+  Eigen::Matrix<double, 3, 3, Eigen::RowMajor> j0_offset;
+  j0_offset.setZero();
+  j0_offset.col(2).setConstant(0.001);
+  cost_function.SetJacobianOffset(0, j0_offset);
+  EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
+  EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
+      << results.error_log;
+
+  // Check that results contain correct data.
+  ASSERT_EQ(results.return_value, true);
+  ExpectMatricesClose(
+      results.residuals, residual, std::numeric_limits<double>::epsilon());
+  CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
+  ASSERT_EQ(results.local_jacobians.size(), 2);
+  ASSERT_EQ(results.local_numeric_jacobians.size(), 2);
+  ExpectMatricesClose(results.local_jacobians.at(0),
+                      (j0 + j0_offset) * global_J_local,
+                      kTolerance);
+  ExpectMatricesClose(results.local_jacobians.at(1),
+                      j1,
+                      std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(
+      results.local_numeric_jacobians.at(0), j0 * global_J_local, kTolerance);
+  ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
+  ExpectMatricesClose(results.jacobians.at(0), j0 + j0_offset, kTolerance);
+  ExpectMatricesClose(
+      results.jacobians.at(1), j1, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(results.numeric_jacobians.at(0), j0, kTolerance);
+  ExpectMatricesClose(results.numeric_jacobians.at(1), j1, kTolerance);
+  EXPECT_GT(results.maximum_relative_error, 0.0);
+  EXPECT_FALSE(results.error_log.empty());
+
+  // Test interaction with the 'check_gradients' option in Solver.
+  param0_solver = param0;
+  param1_solver = param1;
+  solver.Solve(solver_options, &problem, &summary);
+  EXPECT_EQ(FAILURE, summary.termination_type);
+
+  // Now, zero out the manifold Jacobian with respect to the 3rd component of
+  // the 1st parameter. This makes the combination of cost function and manifold
+  // return correct values again.
+  manifold.global_J_local.row(2).setZero();
+
+  // Verify that the gradient checker does not treat this as an error.
+  EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
+      << results.error_log;
+
+  // Check that results contain correct data.
+  ASSERT_EQ(results.return_value, true);
+  ExpectMatricesClose(
+      results.residuals, residual, std::numeric_limits<double>::epsilon());
+  CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
+  ASSERT_EQ(results.local_jacobians.size(), 2);
+  ASSERT_EQ(results.local_numeric_jacobians.size(), 2);
+  ExpectMatricesClose(results.local_jacobians.at(0),
+                      (j0 + j0_offset) * manifold.global_J_local,
+                      kTolerance);
+  ExpectMatricesClose(results.local_jacobians.at(1),
+                      j1,
+                      std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(results.local_numeric_jacobians.at(0),
+                      j0 * manifold.global_J_local,
+                      kTolerance);
+  ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
+  ExpectMatricesClose(results.jacobians.at(0), j0 + j0_offset, kTolerance);
+  ExpectMatricesClose(
+      results.jacobians.at(1), j1, std::numeric_limits<double>::epsilon());
+  ExpectMatricesClose(results.numeric_jacobians.at(0), j0, kTolerance);
+  ExpectMatricesClose(results.numeric_jacobians.at(1), j1, kTolerance);
+  EXPECT_GE(results.maximum_relative_error, 0.0);
+  EXPECT_TRUE(results.error_log.empty());
+
+  // Test interaction with the 'check_gradients' option in Solver.
+  param0_solver = param0;
+  param1_solver = param1;
+  solver.Solve(solver_options, &problem, &summary);
+  EXPECT_EQ(CONVERGENCE, summary.termination_type);
+  EXPECT_LE(summary.final_cost, 1e-12);
+}
 }  // namespace internal
 }  // namespace ceres
diff --git a/internal/ceres/gradient_checking_cost_function.cc b/internal/ceres/gradient_checking_cost_function.cc
index 2eb6d62..4bd5bcc 100644
--- a/internal/ceres/gradient_checking_cost_function.cc
+++ b/internal/ceres/gradient_checking_cost_function.cc
@@ -62,15 +62,14 @@
 
 class GradientCheckingCostFunction : public CostFunction {
  public:
-  GradientCheckingCostFunction(
-      const CostFunction* function,
-      const std::vector<const LocalParameterization*>* local_parameterizations,
-      const NumericDiffOptions& options,
-      double relative_precision,
-      const string& extra_info,
-      GradientCheckingIterationCallback* callback)
+  GradientCheckingCostFunction(const CostFunction* function,
+                               const std::vector<const Manifold*>* manifolds,
+                               const NumericDiffOptions& options,
+                               double relative_precision,
+                               const string& extra_info,
+                               GradientCheckingIterationCallback* callback)
       : function_(function),
-        gradient_checker_(function, local_parameterizations, options),
+        gradient_checker_(function, manifolds, options),
         relative_precision_(relative_precision),
         extra_info_(extra_info),
         callback_(callback) {
@@ -154,7 +153,7 @@
 
 CostFunction* CreateGradientCheckingCostFunction(
     const CostFunction* cost_function,
-    const std::vector<const LocalParameterization*>* local_parameterizations,
+    const std::vector<const Manifold*>* manifolds,
     double relative_step_size,
     double relative_precision,
     const std::string& extra_info,
@@ -163,7 +162,7 @@
   numeric_diff_options.relative_step_size = relative_step_size;
 
   return new GradientCheckingCostFunction(cost_function,
-                                          local_parameterizations,
+                                          manifolds,
                                           numeric_diff_options,
                                           relative_precision,
                                           extra_info,
@@ -176,18 +175,16 @@
     double relative_precision,
     GradientCheckingIterationCallback* callback) {
   CHECK(callback != nullptr);
-  // We create new CostFunctions by wrapping the original CostFunction
-  // in a gradient checking CostFunction. So its okay for the
-  // ProblemImpl to take ownership of it and destroy it. The
-  // LossFunctions and LocalParameterizations are reused and since
-  // they are owned by problem_impl, gradient_checking_problem_impl
+  // We create new CostFunctions by wrapping the original CostFunction in a
+  // gradient checking CostFunction. So it's okay for the ProblemImpl to take
+  // ownership of it and destroy it. The LossFunctions and Manifolds are reused
+  // and since they are owned by problem_impl, gradient_checking_problem_impl
   // should not take ownership of it.
   Problem::Options gradient_checking_problem_options;
   gradient_checking_problem_options.cost_function_ownership = TAKE_OWNERSHIP;
   gradient_checking_problem_options.loss_function_ownership =
       DO_NOT_TAKE_OWNERSHIP;
-  gradient_checking_problem_options.local_parameterization_ownership =
-      DO_NOT_TAKE_OWNERSHIP;
+  gradient_checking_problem_options.manifold_ownership = DO_NOT_TAKE_OWNERSHIP;
   gradient_checking_problem_options.context = problem_impl->context();
 
   NumericDiffOptions numeric_diff_options;
@@ -198,15 +195,15 @@
 
   Program* program = problem_impl->mutable_program();
 
-  // For every ParameterBlock in problem_impl, create a new parameter
-  // block with the same local parameterization and constancy.
+  // For every ParameterBlock in problem_impl, create a new parameter block with
+  // the same manifold and constancy.
   const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
   for (int i = 0; i < parameter_blocks.size(); ++i) {
     ParameterBlock* parameter_block = parameter_blocks[i];
     gradient_checking_problem_impl->AddParameterBlock(
         parameter_block->mutable_user_state(),
         parameter_block->Size(),
-        parameter_block->mutable_local_parameterization());
+        parameter_block->mutable_manifold());
 
     if (parameter_block->IsConstant()) {
       gradient_checking_problem_impl->SetParameterBlockConstant(
@@ -238,22 +235,22 @@
     string extra_info =
         StringPrintf("Residual block id %d; depends on parameters [", i);
     vector<double*> parameter_blocks;
-    vector<const LocalParameterization*> local_parameterizations;
+    vector<const Manifold*> manifolds;
     parameter_blocks.reserve(residual_block->NumParameterBlocks());
-    local_parameterizations.reserve(residual_block->NumParameterBlocks());
+    manifolds.reserve(residual_block->NumParameterBlocks());
     for (int j = 0; j < residual_block->NumParameterBlocks(); ++j) {
       ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
       parameter_blocks.push_back(parameter_block->mutable_user_state());
       StringAppendF(&extra_info, "%p", parameter_block->mutable_user_state());
       extra_info += (j < residual_block->NumParameterBlocks() - 1) ? ", " : "]";
-      local_parameterizations.push_back(problem_impl->GetParameterization(
-          parameter_block->mutable_user_state()));
+      manifolds.push_back(
+          problem_impl->GetManifold(parameter_block->mutable_user_state()));
     }
 
     // Wrap the original CostFunction in a GradientCheckingCostFunction.
     CostFunction* gradient_checking_cost_function =
         new GradientCheckingCostFunction(residual_block->cost_function(),
-                                         &local_parameterizations,
+                                         &manifolds,
                                          numeric_diff_options,
                                          relative_precision,
                                          extra_info,
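
For context, this gradient checking problem is built by the solver itself; a
minimal sketch of how user code reaches this path, assuming an already
populated ceres::Problem named problem:

    #include "ceres/solver.h"

    // With check_gradients enabled the solver constructs the gradient checking
    // problem above, wrapping every CostFunction in a
    // GradientCheckingCostFunction that is now parameterized by Manifolds.
    ceres::Solver::Options options;
    options.check_gradients = true;
    options.gradient_check_relative_precision = 1e-8;

    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
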
diff --git a/internal/ceres/gradient_checking_cost_function.h b/internal/ceres/gradient_checking_cost_function.h
index ea6e9b3..46cf4dd 100644
--- a/internal/ceres/gradient_checking_cost_function.h
+++ b/internal/ceres/gradient_checking_cost_function.h
@@ -38,7 +38,7 @@
 #include "ceres/cost_function.h"
 #include "ceres/internal/port.h"
 #include "ceres/iteration_callback.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 
 namespace ceres {
 namespace internal {
@@ -75,7 +75,7 @@
 // implementation directly.
 CERES_EXPORT_INTERNAL CostFunction* CreateGradientCheckingCostFunction(
     const CostFunction* cost_function,
-    const std::vector<const LocalParameterization*>* local_parameterizations,
+    const std::vector<const Manifold*>* manifolds,
     double relative_step_size,
     double relative_precision,
     const std::string& extra_info,
diff --git a/internal/ceres/gradient_checking_cost_function_test.cc b/internal/ceres/gradient_checking_cost_function_test.cc
index 9ca51f8..8c47c35 100644
--- a/internal/ceres/gradient_checking_cost_function_test.cc
+++ b/internal/ceres/gradient_checking_cost_function_test.cc
@@ -38,6 +38,7 @@
 #include "ceres/cost_function.h"
 #include "ceres/local_parameterization.h"
 #include "ceres/loss_function.h"
+#include "ceres/manifold.h"
 #include "ceres/parameter_block.h"
 #include "ceres/problem_impl.h"
 #include "ceres/program.h"
@@ -324,7 +325,7 @@
 };
 
 // Verify that the two ParameterBlocks are formed from the same user
-// array and have the same LocalParameterization object.
+// array and have the same Manifold object.
 static void ParameterBlocksAreEquivalent(const ParameterBlock* left,
                                          const ParameterBlock* right) {
   CHECK(left != nullptr);
@@ -332,11 +333,90 @@
   EXPECT_EQ(left->user_state(), right->user_state());
   EXPECT_EQ(left->Size(), right->Size());
   EXPECT_EQ(left->Size(), right->Size());
-  EXPECT_EQ(left->LocalSize(), right->LocalSize());
-  EXPECT_EQ(left->local_parameterization(), right->local_parameterization());
+  EXPECT_EQ(left->TangentSize(), right->TangentSize());
+  EXPECT_EQ(left->manifold(), right->manifold());
   EXPECT_EQ(left->IsConstant(), right->IsConstant());
 }
 
+TEST(GradientCheckingProblemImpl,
+     ProblemDimensionsMatchUsingLocalParameterization) {
+  // Parameter blocks with arbitrarily chosen initial values.
+  double x[] = {1.0, 2.0, 3.0};
+  double y[] = {4.0, 5.0, 6.0, 7.0};
+  double z[] = {8.0, 9.0, 10.0, 11.0, 12.0};
+  double w[] = {13.0, 14.0, 15.0, 16.0};
+
+  ProblemImpl problem_impl;
+  problem_impl.AddParameterBlock(x, 3);
+  problem_impl.AddParameterBlock(y, 4);
+  problem_impl.SetParameterBlockConstant(y);
+  problem_impl.AddParameterBlock(z, 5);
+  problem_impl.AddParameterBlock(w, 4, new QuaternionParameterization);
+  // clang-format off
+  problem_impl.AddResidualBlock(new UnaryCostFunction(2, 3),
+                                NULL, x);
+  problem_impl.AddResidualBlock(new BinaryCostFunction(6, 5, 4),
+                                NULL, z, y);
+  problem_impl.AddResidualBlock(new BinaryCostFunction(3, 3, 5),
+                                new TrivialLoss, x, z);
+  problem_impl.AddResidualBlock(new BinaryCostFunction(7, 5, 3),
+                                NULL, z, x);
+  problem_impl.AddResidualBlock(new TernaryCostFunction(1, 5, 3, 4),
+                                NULL, z, x, y);
+  // clang-format on
+
+  GradientCheckingIterationCallback callback;
+  std::unique_ptr<ProblemImpl> gradient_checking_problem_impl(
+      CreateGradientCheckingProblemImpl(&problem_impl, 1.0, 1.0, &callback));
+
+  // The dimensions of the two problems match.
+  EXPECT_EQ(problem_impl.NumParameterBlocks(),
+            gradient_checking_problem_impl->NumParameterBlocks());
+  EXPECT_EQ(problem_impl.NumResidualBlocks(),
+            gradient_checking_problem_impl->NumResidualBlocks());
+
+  EXPECT_EQ(problem_impl.NumParameters(),
+            gradient_checking_problem_impl->NumParameters());
+  EXPECT_EQ(problem_impl.NumResiduals(),
+            gradient_checking_problem_impl->NumResiduals());
+
+  const Program& program = problem_impl.program();
+  const Program& gradient_checking_program =
+      gradient_checking_problem_impl->program();
+
+  // Since we added the ParameterBlocks and ResidualBlocks explicitly,
+  // they should be in the same order in the two programs. It is
+  // possible that this may change due to implementation changes to
+  // Program. This is not expected to be the case, and writing code to
+  // anticipate that possibility is not worth the extra complexity in
+  // this test.
+  for (int i = 0; i < program.parameter_blocks().size(); ++i) {
+    ParameterBlocksAreEquivalent(
+        program.parameter_blocks()[i],
+        gradient_checking_program.parameter_blocks()[i]);
+  }
+
+  for (int i = 0; i < program.residual_blocks().size(); ++i) {
+    // Compare the sizes of the two ResidualBlocks.
+    const ResidualBlock* original_residual_block = program.residual_blocks()[i];
+    const ResidualBlock* new_residual_block =
+        gradient_checking_program.residual_blocks()[i];
+    EXPECT_EQ(original_residual_block->NumParameterBlocks(),
+              new_residual_block->NumParameterBlocks());
+    EXPECT_EQ(original_residual_block->NumResiduals(),
+              new_residual_block->NumResiduals());
+    EXPECT_EQ(original_residual_block->NumScratchDoublesForEvaluate(),
+              new_residual_block->NumScratchDoublesForEvaluate());
+
+    // Verify that the ParameterBlocks for the two residuals are equivalent.
+    for (int j = 0; j < original_residual_block->NumParameterBlocks(); ++j) {
+      ParameterBlocksAreEquivalent(
+          original_residual_block->parameter_blocks()[j],
+          new_residual_block->parameter_blocks()[j]);
+    }
+  }
+}
+
 TEST(GradientCheckingProblemImpl, ProblemDimensionsMatch) {
   // Parameter blocks with arbitrarily chosen initial values.
   double x[] = {1.0, 2.0, 3.0};
@@ -349,7 +429,7 @@
   problem_impl.AddParameterBlock(y, 4);
   problem_impl.SetParameterBlockConstant(y);
   problem_impl.AddParameterBlock(z, 5);
-  problem_impl.AddParameterBlock(w, 4, new QuaternionParameterization);
+  problem_impl.AddParameterBlock(w, 4, new Quaternion);
   // clang-format off
   problem_impl.AddResidualBlock(new UnaryCostFunction(2, 3),
                                 NULL, x);
diff --git a/internal/ceres/gradient_problem.cc b/internal/ceres/gradient_problem.cc
index ba33fbc..dc8ac72 100644
--- a/internal/ceres/gradient_problem.cc
+++ b/internal/ceres/gradient_problem.cc
@@ -31,48 +31,57 @@
 #include "ceres/gradient_problem.h"
 
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold_adapter.h"
 #include "glog/logging.h"
 
 namespace ceres {
 
 GradientProblem::GradientProblem(FirstOrderFunction* function)
-    : function_(function),
-      parameterization_(
-          new IdentityParameterization(function_->NumParameters())),
+    : function_(CHECK_NOTNULL(function)),
+      manifold_(new EuclideanManifold(function_->NumParameters())),
       scratch_(new double[function_->NumParameters()]) {}
 
 GradientProblem::GradientProblem(FirstOrderFunction* function,
                                  LocalParameterization* parameterization)
-    : function_(function),
-      parameterization_(parameterization),
+    : function_(CHECK_NOTNULL(function)),
+      parameterization_(CHECK_NOTNULL(parameterization)),
+      manifold_(new internal::ManifoldAdapter(parameterization_.get())),
       scratch_(new double[function_->NumParameters()]) {
-  CHECK_EQ(function_->NumParameters(), parameterization_->GlobalSize());
+  CHECK_EQ(function_->NumParameters(), manifold_->AmbientSize());
+}
+
+GradientProblem::GradientProblem(FirstOrderFunction* function,
+                                 Manifold* manifold)
+    : function_(CHECK_NOTNULL(function)),
+      manifold_(CHECK_NOTNULL(manifold)),
+      scratch_(new double[function_->NumParameters()]) {
+  CHECK_EQ(function_->NumParameters(), manifold_->AmbientSize());
 }
 
 int GradientProblem::NumParameters() const {
   return function_->NumParameters();
 }
 
-int GradientProblem::NumLocalParameters() const {
-  return parameterization_->LocalSize();
+int GradientProblem::NumTangentParameters() const {
+  return manifold_->TangentSize();
 }
 
 bool GradientProblem::Evaluate(const double* parameters,
                                double* cost,
                                double* gradient) const {
-  if (gradient == NULL) {
-    return function_->Evaluate(parameters, cost, NULL);
+  if (gradient == nullptr) {
+    return function_->Evaluate(parameters, cost, nullptr);
   }
 
   return (function_->Evaluate(parameters, cost, scratch_.get()) &&
-          parameterization_->MultiplyByJacobian(
+          manifold_->RightMultiplyByPlusJacobian(
               parameters, 1, scratch_.get(), gradient));
 }
 
 bool GradientProblem::Plus(const double* x,
                            const double* delta,
                            double* x_plus_delta) const {
-  return parameterization_->Plus(x, delta, x_plus_delta);
+  return manifold_->Plus(x, delta, x_plus_delta);
 }
 
 }  // namespace ceres
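
To make the three construction paths above concrete, a minimal usage sketch;
QuadraticFunction is a hypothetical FirstOrderFunction used only for
illustration and is not part of this change:

    #include "ceres/gradient_problem.h"
    #include "ceres/manifold.h"

    // f(x) = x0^2 + x1^2 with its ambient-space gradient.
    class QuadraticFunction : public ceres::FirstOrderFunction {
     public:
      bool Evaluate(const double* x, double* cost, double* gradient) const override {
        *cost = x[0] * x[0] + x[1] * x[1];
        if (gradient != nullptr) {
          gradient[0] = 2.0 * x[0];
          gradient[1] = 2.0 * x[1];
        }
        return true;
      }
      int NumParameters() const override { return 2; }
    };

    void GradientProblemSketch() {
      // The problem owns both arguments. Passing EuclideanManifold here is
      // equivalent to using the single-argument constructor.
      ceres::GradientProblem problem(new QuadraticFunction,
                                     new ceres::EuclideanManifold(2));
      double x[2] = {1.0, 2.0};
      double cost = 0.0;
      double gradient[2];
      // The ambient gradient is mapped to the tangent space through
      // Manifold::RightMultiplyByPlusJacobian (a no-op for EuclideanManifold).
      problem.Evaluate(x, &cost, gradient);
    }
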
diff --git a/internal/ceres/gradient_problem_test.cc b/internal/ceres/gradient_problem_test.cc
index 8934138..7a279a3 100644
--- a/internal/ceres/gradient_problem_test.cc
+++ b/internal/ceres/gradient_problem_test.cc
@@ -105,5 +105,33 @@
   EXPECT_EQ(2.0 * x, gradient);
 }
 
+TEST(GradientProblem, EvaluationWithManifoldAndNoGradient) {
+  ceres::GradientProblem problem(new QuadraticTestFunction(),
+                                 new EuclideanManifold(1));
+  double x = 7.0;
+  double cost = 0;
+  problem.Evaluate(&x, &cost, nullptr);
+  EXPECT_EQ(x * x, cost);
+}
+
+TEST(GradientProblem, EvaluationWithoutManifoldAndWithGradient) {
+  ceres::GradientProblem problem(new QuadraticTestFunction());
+  double x = 7.0;
+  double cost = 0;
+  double gradient = 0;
+  problem.Evaluate(&x, &cost, &gradient);
+  EXPECT_EQ(2.0 * x, gradient);
+}
+
+TEST(GradientProblem, EvaluationWithManifoldAndWithGradient) {
+  ceres::GradientProblem problem(new QuadraticTestFunction(),
+                                 new EuclideanManifold(1));
+  double x = 7.0;
+  double cost = 0;
+  double gradient = 0;
+  problem.Evaluate(&x, &cost, &gradient);
+  EXPECT_EQ(2.0 * x, gradient);
+}
+
 }  // namespace internal
 }  // namespace ceres
diff --git a/internal/ceres/manifold_adapter.h b/internal/ceres/manifold_adapter.h
new file mode 100644
index 0000000..49d7a0a
--- /dev/null
+++ b/internal/ceres/manifold_adapter.h
@@ -0,0 +1,58 @@
+#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
+#include "glog/logging.h"
+
+namespace ceres {
+namespace internal {
+
+// Adapter to wrap a LocalParameterization and make it look like a Manifold.
+//
+// ManifoldAdapter NEVER takes ownership of local_parameterization.
+class ManifoldAdapter : public Manifold {
+ public:
+  ManifoldAdapter(const LocalParameterization* local_parameterization)
+      : local_parameterization_(CHECK_NOTNULL(local_parameterization)) {}
+  virtual ~ManifoldAdapter() {}
+
+  bool Plus(const double* x,
+            const double* delta,
+            double* x_plus_delta) const override {
+    return local_parameterization_->Plus(x, delta, x_plus_delta);
+  }
+
+  bool PlusJacobian(const double* x, double* jacobian) const override {
+    return local_parameterization_->ComputeJacobian(x, jacobian);
+  }
+
+  bool RightMultiplyByPlusJacobian(const double* x,
+                                   const int num_rows,
+                                   const double* ambient_matrix,
+                                   double* tangent_matrix) const override {
+    return local_parameterization_->MultiplyByJacobian(
+        x, num_rows, ambient_matrix, tangent_matrix);
+  }
+
+  bool Minus(const double* y, const double* x, double* delta) const override {
+    LOG(FATAL) << "This should never be called.";
+    return false;
+  }
+
+  bool MinusJacobian(const double* x, double* jacobian) const override {
+    LOG(FATAL) << "This should never be called.";
+    return false;
+  }
+
+  int AmbientSize() const override {
+    return local_parameterization_->GlobalSize();
+  }
+
+  int TangentSize() const override {
+    return local_parameterization_->LocalSize();
+  }
+
+ private:
+  const LocalParameterization* local_parameterization_;
+};
+
+}  // namespace internal
+}  // namespace ceres
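
A short sketch of the adapter's behavior; the wrapped SubsetParameterization is
just an example, and ManifoldAdapter itself stays internal:

    #include "ceres/local_parameterization.h"
    #include "ceres/manifold_adapter.h"

    void ManifoldAdapterSketch() {
      // Hold the last coordinate of a 3-vector constant.
      ceres::SubsetParameterization local_parameterization(3, {2});

      // The adapter does not take ownership. GlobalSize()/LocalSize() surface
      // as AmbientSize()/TangentSize() (3 and 2 here); Plus, PlusJacobian and
      // RightMultiplyByPlusJacobian forward to the wrapped object, while Minus
      // and MinusJacobian deliberately LOG(FATAL) since nothing calls them yet.
      ceres::internal::ManifoldAdapter manifold(&local_parameterization);

      double x[3] = {1.0, 2.0, 3.0};
      double delta[2] = {0.5, -0.5};
      double x_plus_delta[3];
      manifold.Plus(x, delta, x_plus_delta);  // -> {1.5, 1.5, 3.0}
    }
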
diff --git a/internal/ceres/parameter_block.h b/internal/ceres/parameter_block.h
index 88943df..824745c 100644
--- a/internal/ceres/parameter_block.h
+++ b/internal/ceres/parameter_block.h
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -42,7 +42,7 @@
 #include "ceres/array_utils.h"
 #include "ceres/internal/eigen.h"
 #include "ceres/internal/port.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "ceres/stringprintf.h"
 #include "glog/logging.h"
 
@@ -58,9 +58,9 @@
 // methods are performance sensitive.
 //
 // The class is not thread-safe, unless only const methods are called. The
-// parameter block may also hold a pointer to a local parameterization; the
-// parameter block does not take ownership of this pointer, so the user is
-// responsible for the proper disposal of the local parameterization.
+// parameter block may also hold a pointer to a manifold; the parameter block
+// does not take ownership of this pointer, so the user is responsible for the
+// proper disposal of the manifold.
 class ParameterBlock {
  public:
   typedef std::unordered_set<ResidualBlock*> ResidualBlockSet;
@@ -74,16 +74,13 @@
         state_(user_state),
         index_(index) {}
 
-  ParameterBlock(double* user_state,
-                 int size,
-                 int index,
-                 LocalParameterization* local_parameterization)
+  ParameterBlock(double* user_state, int size, int index, Manifold* manifold)
       : user_state_(user_state),
         size_(size),
         state_(user_state),
         index_(index) {
-    if (local_parameterization != nullptr) {
-      SetParameterization(local_parameterization);
+    if (manifold != nullptr) {
+      SetManifold(manifold);
     }
   }
 
@@ -98,7 +95,7 @@
                          << "with user location " << user_state_;
 
     state_ = x;
-    return UpdateLocalParameterizationJacobian();
+    return UpdatePlusJacobian();
   }
 
   // Copy the current parameter state out to x. This is "GetState()" rather than
@@ -114,17 +111,13 @@
   const double* state() const { return state_; }
   const double* user_state() const { return user_state_; }
   double* mutable_user_state() { return user_state_; }
-  const LocalParameterization* local_parameterization() const {
-    return local_parameterization_;
-  }
-  LocalParameterization* mutable_local_parameterization() {
-    return local_parameterization_;
-  }
+  const Manifold* manifold() const { return manifold_; }
+  Manifold* mutable_manifold() { return manifold_; }
 
   // Set this parameter block to vary or not.
   void SetConstant() { is_set_constant_ = true; }
   void SetVarying() { is_set_constant_ = false; }
-  bool IsConstant() const { return (is_set_constant_ || LocalSize() == 0); }
+  bool IsConstant() const { return (is_set_constant_ || TangentSize() == 0); }
 
   double UpperBound(int index) const {
     return (upper_bounds_ ? upper_bounds_[index]
@@ -151,51 +144,46 @@
   int delta_offset() const { return delta_offset_; }
   void set_delta_offset(int delta_offset) { delta_offset_ = delta_offset; }
 
-  // Methods relating to the parameter block's parameterization.
+  // Methods relating to the parameter block's manifold.
 
-  // The local to global jacobian. Returns nullptr if there is no local
-  // parameterization for this parameter block. The returned matrix is row-major
-  // and has Size() rows and  LocalSize() columns.
-  const double* LocalParameterizationJacobian() const {
-    return local_parameterization_jacobian_.get();
+  // The Jacobian of Plus, mapping the tangent space to the ambient space.
+  // Returns nullptr if there is no manifold for this parameter block. The
+  // returned matrix is row-major with Size() rows and TangentSize() columns.
+  const double* PlusJacobian() const { return plus_jacobian_.get(); }
+
+  int TangentSize() const {
+    return (manifold_ == nullptr) ? size_ : manifold_->TangentSize();
   }
 
-  int LocalSize() const {
-    return (local_parameterization_ == nullptr)
-               ? size_
-               : local_parameterization_->LocalSize();
-  }
-
-  // Set the parameterization. The parameter block does not take
-  // ownership of the parameterization.
-  void SetParameterization(LocalParameterization* new_parameterization) {
-    // Nothing to do if the new parameterization is the same as the
-    // old parameterization.
-    if (new_parameterization == local_parameterization_) {
+  // Set the manifold. The parameter block does not take ownership of
+  // the manifold.
+  void SetManifold(Manifold* new_manifold) {
+    // Nothing to do if the new manifold is the same as the old
+    // manifold.
+    if (new_manifold == manifold_) {
       return;
     }
 
-    if (new_parameterization == nullptr) {
-      local_parameterization_ = nullptr;
+    if (new_manifold == nullptr) {
+      manifold_ = nullptr;
+      plus_jacobian_.reset(nullptr);
       return;
     }
 
-    CHECK(new_parameterization->GlobalSize() == size_)
-        << "Invalid parameterization for parameter block. The parameter block "
-        << "has size " << size_ << " while the parameterization has a global "
-        << "size of " << new_parameterization->GlobalSize() << ". Did you "
-        << "accidentally use the wrong parameter block or parameterization?";
+    CHECK_EQ(new_manifold->AmbientSize(), size_)
+        << "The parameter block has size = " << size_
+        << " while the manifold has ambient size = "
+        << new_manifold->AmbientSize();
 
-    CHECK_GE(new_parameterization->LocalSize(), 0)
-        << "Invalid parameterization. Parameterizations must have a "
+    CHECK_GE(new_manifold->TangentSize(), 0)
+        << "Invalid Manifold. Manifolds must have a "
         << "non-negative dimensional tangent space.";
 
-    local_parameterization_ = new_parameterization;
-    local_parameterization_jacobian_.reset(
-        new double[local_parameterization_->GlobalSize() *
-                   local_parameterization_->LocalSize()]);
-    CHECK(UpdateLocalParameterizationJacobian())
-        << "Local parameterization Jacobian computation failed for x: "
+    manifold_ = new_manifold;
+    plus_jacobian_.reset(
+        new double[manifold_->AmbientSize() * manifold_->TangentSize()]);
+    CHECK(UpdatePlusJacobian())
+        << "Manifold::PlusJacobian computation failed for x: "
         << ConstVectorRef(state_, Size()).transpose();
   }
 
@@ -234,11 +222,11 @@
   }
 
   // Generalization of the addition operation. This is the same as
-  // LocalParameterization::Plus() followed by projection onto the
+  // Manifold::Plus() followed by projection onto the
   // hyper cube implied by the bounds constraints.
   bool Plus(const double* x, const double* delta, double* x_plus_delta) {
-    if (local_parameterization_ != nullptr) {
-      if (!local_parameterization_->Plus(x, delta, x_plus_delta)) {
+    if (manifold_ != nullptr) {
+      if (!manifold_->Plus(x, delta, x_plus_delta)) {
         return false;
       }
     } else {
@@ -321,33 +309,30 @@
   }
 
  private:
-  bool UpdateLocalParameterizationJacobian() {
-    if (local_parameterization_ == nullptr) {
+  bool UpdatePlusJacobian() {
+    if (manifold_ == nullptr) {
       return true;
     }
 
-    // Update the local to global Jacobian. In some cases this is
+    // Update the Plus Jacobian. In some cases this is
     // wasted effort; if this is a bottleneck, we will find a solution
     // at that time.
-
-    const int jacobian_size = Size() * LocalSize();
-    InvalidateArray(jacobian_size, local_parameterization_jacobian_.get());
-    if (!local_parameterization_->ComputeJacobian(
-            state_, local_parameterization_jacobian_.get())) {
-      LOG(WARNING) << "Local parameterization Jacobian computation failed"
+    const int jacobian_size = Size() * TangentSize();
+    InvalidateArray(jacobian_size, plus_jacobian_.get());
+    if (!manifold_->PlusJacobian(state_, plus_jacobian_.get())) {
+      LOG(WARNING) << "Manifold::PlusJacobian computation failed"
                       "for x: "
                    << ConstVectorRef(state_, Size()).transpose();
       return false;
     }
 
-    if (!IsArrayValid(jacobian_size, local_parameterization_jacobian_.get())) {
-      LOG(WARNING) << "Local parameterization Jacobian computation returned"
+    if (!IsArrayValid(jacobian_size, plus_jacobian_.get())) {
+      LOG(WARNING) << "Manifold::PlusJacobian computation returned "
                    << "an invalid matrix for x: "
                    << ConstVectorRef(state_, Size()).transpose()
                    << "\n Jacobian matrix : "
-                   << ConstMatrixRef(local_parameterization_jacobian_.get(),
-                                     Size(),
-                                     LocalSize());
+                   << ConstMatrixRef(
+                          plus_jacobian_.get(), Size(), TangentSize());
       return false;
     }
     return true;
@@ -356,14 +341,14 @@
   double* user_state_ = nullptr;
   int size_ = -1;
   bool is_set_constant_ = false;
-  LocalParameterization* local_parameterization_ = nullptr;
+  Manifold* manifold_ = nullptr;
 
   // The "state" of the parameter. These fields are only needed while the
   // solver is running. While at first glance using mutable is a bad idea, this
   // ends up simplifying the internals of Ceres enough to justify the potential
   // pitfalls of using "mutable."
   mutable const double* state_ = nullptr;
-  mutable std::unique_ptr<double[]> local_parameterization_jacobian_;
+  mutable std::unique_ptr<double[]> plus_jacobian_;
 
   // The index of the parameter. This is used by various other parts of Ceres to
   // permit switching from a ParameterBlock* to an index in another array.
@@ -392,7 +377,7 @@
   std::unique_ptr<double[]> upper_bounds_;
   std::unique_ptr<double[]> lower_bounds_;
 
-  // Necessary so ProblemImpl can clean up the parameterizations.
+  // Necessary so ProblemImpl can clean up the manifolds.
   friend class ProblemImpl;
 };
 
diff --git a/internal/ceres/parameter_block_test.cc b/internal/ceres/parameter_block_test.cc
index a5a4230..713a0f7 100644
--- a/internal/ceres/parameter_block_test.cc
+++ b/internal/ceres/parameter_block_test.cc
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -36,70 +36,69 @@
 namespace ceres {
 namespace internal {
 
-TEST(ParameterBlock, SetParameterizationDiesOnSizeMismatch) {
+TEST(ParameterBlock, SetManifoldDiesOnSizeMismatch) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset_wrong_size(4, indices);
-  EXPECT_DEATH_IF_SUPPORTED(
-      parameter_block.SetParameterization(&subset_wrong_size), "global");
+  SubsetManifold subset_wrong_size(4, indices);
+  EXPECT_DEATH_IF_SUPPORTED(parameter_block.SetManifold(&subset_wrong_size),
+                            "ambient");
 }
 
-TEST(ParameterBlock, SetParameterizationWithSameExistingParameterization) {
+TEST(ParameterBlock, SetManifoldWithSameExistingManifold) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
-  parameter_block.SetParameterization(&subset);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
+  parameter_block.SetManifold(&subset);
 }
 
-TEST(ParameterBlock, SetParameterizationAllowsResettingToNull) {
+TEST(ParameterBlock, SetManifoldAllowsResettingToNull) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset);
-  parameter_block.SetParameterization(nullptr);
-  EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
+  EXPECT_EQ(parameter_block.manifold(), &subset);
+  parameter_block.SetManifold(nullptr);
+  EXPECT_EQ(parameter_block.manifold(), nullptr);
+  EXPECT_EQ(parameter_block.PlusJacobian(), nullptr);
 }
 
-TEST(ParameterBlock,
-     SetParameterizationAllowsResettingToDifferentParameterization) {
+TEST(ParameterBlock, SetManifoldAllowsResettingToDifferentManifold) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
+  EXPECT_EQ(parameter_block.manifold(), &subset);
 
-  SubsetParameterization subset_different(3, indices);
-  parameter_block.SetParameterization(&subset_different);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset_different);
+  SubsetManifold subset_different(3, indices);
+  parameter_block.SetManifold(&subset_different);
+  EXPECT_EQ(parameter_block.manifold(), &subset_different);
 }
 
-TEST(ParameterBlock, SetParameterizationAndNormalOperation) {
+TEST(ParameterBlock, SetManifoldAndNormalOperation) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
 
-  // Ensure the local parameterization jacobian result is correctly computed.
-  ConstMatrixRef local_parameterization_jacobian(
-      parameter_block.LocalParameterizationJacobian(), 3, 2);
-  ASSERT_EQ(1.0, local_parameterization_jacobian(0, 0));
-  ASSERT_EQ(0.0, local_parameterization_jacobian(0, 1));
-  ASSERT_EQ(0.0, local_parameterization_jacobian(1, 0));
-  ASSERT_EQ(0.0, local_parameterization_jacobian(1, 1));
-  ASSERT_EQ(0.0, local_parameterization_jacobian(2, 0));
-  ASSERT_EQ(1.0, local_parameterization_jacobian(2, 1));
+  // Ensure the manifold plus jacobian result is correctly computed.
+  ConstMatrixRef manifold_jacobian(parameter_block.PlusJacobian(), 3, 2);
+  ASSERT_EQ(1.0, manifold_jacobian(0, 0));
+  ASSERT_EQ(0.0, manifold_jacobian(0, 1));
+  ASSERT_EQ(0.0, manifold_jacobian(1, 0));
+  ASSERT_EQ(0.0, manifold_jacobian(1, 1));
+  ASSERT_EQ(0.0, manifold_jacobian(2, 0));
+  ASSERT_EQ(1.0, manifold_jacobian(2, 1));
 
   // Check that updating works as expected.
   double x_plus_delta[3];
@@ -110,37 +109,48 @@
   ASSERT_EQ(3.3, x_plus_delta[2]);
 }
 
-struct TestParameterization : public LocalParameterization {
+struct TestManifold : public Manifold {
  public:
-  virtual ~TestParameterization() {}
+  virtual ~TestManifold() {}
   bool Plus(const double* x,
             const double* delta,
             double* x_plus_delta) const final {
     LOG(FATAL) << "Shouldn't get called.";
     return true;
   }
-  bool ComputeJacobian(const double* x, double* jacobian) const final {
+
+  bool PlusJacobian(const double* x, double* jacobian) const final {
     jacobian[0] = *x * 2;
     return true;
   }
 
-  int GlobalSize() const final { return 1; }
-  int LocalSize() const final { return 1; }
+  bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+    LOG(FATAL) << "Shouldn't get called";
+    return true;
+  }
+
+  bool MinusJacobian(const double* x, double* jacobian) const final {
+    jacobian[0] = *x * 2;
+    return true;
+  }
+
+  int AmbientSize() const final { return 1; }
+  int TangentSize() const final { return 1; }
 };
 
-TEST(ParameterBlock, SetStateUpdatesLocalParameterizationJacobian) {
-  TestParameterization test_parameterization;
+TEST(ParameterBlock, SetStateUpdatesPlusJacobian) {
+  TestManifold test_manifold;
   double x[1] = {1.0};
-  ParameterBlock parameter_block(x, 1, -1, &test_parameterization);
+  ParameterBlock parameter_block(x, 1, -1, &test_manifold);
 
-  EXPECT_EQ(2.0, *parameter_block.LocalParameterizationJacobian());
+  EXPECT_EQ(2.0, *parameter_block.PlusJacobian());
 
   x[0] = 5.5;
   parameter_block.SetState(x);
-  EXPECT_EQ(11.0, *parameter_block.LocalParameterizationJacobian());
+  EXPECT_EQ(11.0, *parameter_block.PlusJacobian());
 }
 
-TEST(ParameterBlock, PlusWithNoLocalParameterization) {
+TEST(ParameterBlock, PlusWithNoManifold) {
   double x[2] = {1.0, 2.0};
   ParameterBlock parameter_block(x, 2, -1);
 
@@ -151,12 +161,12 @@
   EXPECT_EQ(2.3, x_plus_delta[1]);
 }
 
-// Stops computing the jacobian after the first time.
-class BadLocalParameterization : public LocalParameterization {
+// Stops computing the plus_jacobian after the first time.
+class BadManifold : public Manifold {
  public:
-  BadLocalParameterization() : calls_(0) {}
+  BadManifold() : calls_(0) {}
 
-  virtual ~BadLocalParameterization() {}
+  virtual ~BadManifold() {}
   bool Plus(const double* x,
             const double* delta,
             double* x_plus_delta) const final {
@@ -164,7 +174,7 @@
     return true;
   }
 
-  bool ComputeJacobian(const double* x, double* jacobian) const final {
+  bool PlusJacobian(const double* x, double* jacobian) const final {
     if (calls_ == 0) {
       jacobian[0] = 0;
     }
@@ -172,17 +182,27 @@
     return true;
   }
 
-  int GlobalSize() const final { return 1; }
-  int LocalSize() const final { return 1; }
+  bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+    LOG(FATAL) << "Shouldn't get called";
+    return true;
+  }
+
+  bool MinusJacobian(const double* x, double* jacobian) const final {
+    jacobian[0] = *x * 2;
+    return true;
+  }
+
+  int AmbientSize() const final { return 1; }
+  int TangentSize() const final { return 1; }
 
  private:
   mutable int calls_;
 };
 
-TEST(ParameterBlock, DetectBadLocalParameterization) {
+TEST(ParameterBlock, DetectBadManifold) {
   double x = 1;
-  BadLocalParameterization bad_parameterization;
-  ParameterBlock parameter_block(&x, 1, -1, &bad_parameterization);
+  BadManifold bad_manifold;
+  ParameterBlock parameter_block(&x, 1, -1, &bad_manifold);
   double y = 2;
   EXPECT_FALSE(parameter_block.SetState(&y));
 }
@@ -227,39 +247,39 @@
   EXPECT_EQ(x_plus_delta[1], -1.0);
 }
 
-TEST(ParameterBlock, ResetLocalParameterizationToNull) {
+TEST(ParameterBlock, ResetManifoldToNull) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset);
-  parameter_block.SetParameterization(nullptr);
-  EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
+  EXPECT_EQ(parameter_block.manifold(), &subset);
+  parameter_block.SetManifold(nullptr);
+  EXPECT_EQ(parameter_block.manifold(), nullptr);
 }
 
-TEST(ParameterBlock, ResetLocalParameterizationToNotNull) {
+TEST(ParameterBlock, ResetManifoldToNotNull) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
   std::vector<int> indices;
   indices.push_back(1);
-  SubsetParameterization subset(3, indices);
-  parameter_block.SetParameterization(&subset);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset);
+  SubsetManifold subset(3, indices);
+  parameter_block.SetManifold(&subset);
+  EXPECT_EQ(parameter_block.manifold(), &subset);
 
-  SubsetParameterization subset_different(3, indices);
-  parameter_block.SetParameterization(&subset_different);
-  EXPECT_EQ(parameter_block.local_parameterization(), &subset_different);
+  SubsetManifold subset_different(3, indices);
+  parameter_block.SetManifold(&subset_different);
+  EXPECT_EQ(parameter_block.manifold(), &subset_different);
 }
 
-TEST(ParameterBlock, SetNullLocalParameterization) {
+TEST(ParameterBlock, SetNullManifold) {
   double x[3] = {1.0, 2.0, 3.0};
   ParameterBlock parameter_block(x, 3, -1);
-  EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+  EXPECT_EQ(parameter_block.manifold(), nullptr);
 
-  parameter_block.SetParameterization(nullptr);
-  EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+  parameter_block.SetManifold(nullptr);
+  EXPECT_EQ(parameter_block.manifold(), nullptr);
 }
 
 }  // namespace internal
diff --git a/internal/ceres/problem.cc b/internal/ceres/problem.cc
index de4a2b3..f4e9f87 100644
--- a/internal/ceres/problem.cc
+++ b/internal/ceres/problem.cc
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,10 @@
   impl_->AddParameterBlock(values, size, local_parameterization);
 }
 
+void Problem::AddParameterBlock(double* values, int size, Manifold* manifold) {
+  impl_->AddParameterBlock(values, size, manifold);
+}
+
 void Problem::RemoveResidualBlock(ResidualBlockId residual_block) {
   impl_->RemoveResidualBlock(residual_block);
 }
@@ -110,6 +114,18 @@
   return impl_->HasParameterization(values);
 }
 
+void Problem::SetManifold(double* values, Manifold* manifold) {
+  impl_->SetManifold(values, manifold);
+}
+
+const Manifold* Problem::GetManifold(const double* values) const {
+  return impl_->GetManifold(values);
+}
+
+bool Problem::HasManifold(const double* values) const {
+  return impl_->HasManifold(values);
+}
+
 void Problem::SetParameterLowerBound(double* values,
                                      int index,
                                      double lower_bound) {
@@ -178,7 +194,11 @@
 }
 
 int Problem::ParameterBlockLocalSize(const double* parameter_block) const {
-  return impl_->ParameterBlockLocalSize(parameter_block);
+  return impl_->ParameterBlockTangentSize(parameter_block);
+}
+
+int Problem::ParameterBlockTangentSize(const double* parameter_block) const {
+  return impl_->ParameterBlockTangentSize(parameter_block);
 }
 
 bool Problem::HasParameterBlock(const double* values) const {
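
A minimal sketch of the Manifold-facing Problem API that these methods forward
to ProblemImpl; Quaternion is the manifold used elsewhere in this change:

    #include "ceres/manifold.h"
    #include "ceres/problem.h"
    #include "glog/logging.h"

    void ProblemManifoldApiSketch() {
      ceres::Problem problem;
      double q[4] = {1.0, 0.0, 0.0, 0.0};

      // Attach a manifold at creation time; SetManifold() can attach or
      // replace one on an existing parameter block.
      problem.AddParameterBlock(q, 4, new ceres::Quaternion);

      CHECK(problem.HasManifold(q));
      const ceres::Manifold* manifold = problem.GetManifold(q);
      CHECK_EQ(manifold->AmbientSize(), 4);

      // ParameterBlockLocalSize() remains as an alias for the tangent size.
      CHECK_EQ(problem.ParameterBlockSize(q), 4);
      CHECK_EQ(problem.ParameterBlockTangentSize(q), 3);
    }
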
diff --git a/internal/ceres/problem_impl.cc b/internal/ceres/problem_impl.cc
index 492cabc..9e1a2c1 100644
--- a/internal/ceres/problem_impl.cc
+++ b/internal/ceres/problem_impl.cc
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -52,6 +52,8 @@
 #include "ceres/internal/fixed_array.h"
 #include "ceres/internal/port.h"
 #include "ceres/loss_function.h"
+#include "ceres/manifold.h"
+#include "ceres/manifold_adapter.h"
 #include "ceres/map_util.h"
 #include "ceres/parameter_block.h"
 #include "ceres/program.h"
@@ -64,11 +66,6 @@
 
 namespace ceres {
 namespace internal {
-
-using std::map;
-using std::string;
-using std::vector;
-
 namespace {
 // Returns true if two regions of memory, a and b, with sizes size_a and size_b
 // respectively, overlap.
@@ -225,15 +222,7 @@
 
 // Deletes the parameter block in question, assuming there are no other
 // references to it inside the problem (e.g. by any residual blocks).
-// Referenced parameterizations are tucked away for future deletion, since it
-// is not possible to know whether other parts of the problem depend on them
-// without doing a full scan.
 void ProblemImpl::DeleteBlock(ParameterBlock* parameter_block) {
-  if (options_.local_parameterization_ownership == TAKE_OWNERSHIP &&
-      parameter_block->local_parameterization() != nullptr) {
-    local_parameterizations_to_delete_.push_back(
-        parameter_block->mutable_local_parameterization());
-  }
   parameter_block_map_.erase(parameter_block->mutable_user_state());
   delete parameter_block;
 }
@@ -272,6 +261,10 @@
   STLDeleteUniqueContainerPointers(local_parameterizations_to_delete_.begin(),
                                    local_parameterizations_to_delete_.end());
 
+  // Delete the owned manifolds.
+  STLDeleteUniqueContainerPointers(manifolds_to_delete_.begin(),
+                                   manifolds_to_delete_.end());
+
   if (context_impl_owned_) {
     delete context_impl_;
   }
@@ -286,7 +279,7 @@
   CHECK_EQ(num_parameter_blocks, cost_function->parameter_block_sizes().size());
 
   // Check the sizes match.
-  const vector<int32_t>& parameter_block_sizes =
+  const std::vector<int32_t>& parameter_block_sizes =
       cost_function->parameter_block_sizes();
 
   if (!options_.disable_all_safety_checks) {
@@ -295,7 +288,7 @@
         << "that the cost function expects.";
 
     // Check for duplicate parameter blocks.
-    vector<double*> sorted_parameter_blocks(
+    std::vector<double*> sorted_parameter_blocks(
         parameter_blocks, parameter_blocks + num_parameter_blocks);
     sort(sorted_parameter_blocks.begin(), sorted_parameter_blocks.end());
     const bool has_duplicate_items =
@@ -303,7 +296,7 @@
                             sorted_parameter_blocks.end()) !=
          sorted_parameter_blocks.end());
     if (has_duplicate_items) {
-      string blocks;
+      std::string blocks;
       for (int i = 0; i < num_parameter_blocks; ++i) {
         blocks += StringPrintf(" %p ", parameter_blocks[i]);
       }
@@ -315,7 +308,7 @@
   }
 
   // Add parameter blocks and convert the double*'s to parameter blocks.
-  vector<ParameterBlock*> parameter_block_ptrs(num_parameter_blocks);
+  std::vector<ParameterBlock*> parameter_block_ptrs(num_parameter_blocks);
   for (int i = 0; i < num_parameter_blocks; ++i) {
     parameter_block_ptrs[i] = InternalAddParameterBlock(
         parameter_blocks[i], parameter_block_sizes[i]);
@@ -372,12 +365,50 @@
   InternalAddParameterBlock(values, size);
 }
 
+void ProblemImpl::InternalSetParameterization(
+    double* values,
+    ParameterBlock* parameter_block,
+    LocalParameterization* local_parameterization) {
+  parameter_block_to_local_param_[values] = local_parameterization;
+  Manifold* manifold = nullptr;
+  if (local_parameterization != nullptr) {
+    if (options_.local_parameterization_ownership == TAKE_OWNERSHIP) {
+      local_parameterizations_to_delete_.push_back(local_parameterization);
+    }
+
+    manifold = new ManifoldAdapter(local_parameterization);
+    // Add the manifold to manifolds_to_delete_ unconditionally since
+    // we own it and it will need to be deleted.
+    manifolds_to_delete_.push_back(manifold);
+  }
+
+  parameter_block->SetManifold(manifold);
+}
+
+void ProblemImpl::InternalSetManifold(double* values,
+                                      ParameterBlock* parameter_block,
+                                      Manifold* manifold) {
+  // Reset any association between this parameter block and a local
+  // parameterization. This is only needed while we are in the transition from
+  // LocalParameterization to Manifold.
+  parameter_block_to_local_param_[values] = nullptr;
+  if (manifold != nullptr && options_.manifold_ownership == TAKE_OWNERSHIP) {
+    manifolds_to_delete_.push_back(manifold);
+  }
+  parameter_block->SetManifold(manifold);
+}
+
 void ProblemImpl::AddParameterBlock(
     double* values, int size, LocalParameterization* local_parameterization) {
   ParameterBlock* parameter_block = InternalAddParameterBlock(values, size);
-  if (local_parameterization != nullptr) {
-    parameter_block->SetParameterization(local_parameterization);
-  }
+  InternalSetParameterization(values, parameter_block, local_parameterization);
+}
+
+void ProblemImpl::AddParameterBlock(double* values,
+                                    int size,
+                                    Manifold* manifold) {
+  ParameterBlock* parameter_block = InternalAddParameterBlock(values, size);
+  InternalSetManifold(values, parameter_block, manifold);
 }
 
 // Delete a block from a vector of blocks, maintaining the indexing invariant.
@@ -385,7 +416,7 @@
 // vector over the element to remove, then popping the last element. It
 // destroys the ordering in the interest of speed.
 template <typename Block>
-void ProblemImpl::DeleteBlockInVector(vector<Block*>* mutable_blocks,
+void ProblemImpl::DeleteBlockInVector(std::vector<Block*>* mutable_blocks,
                                       Block* block_to_remove) {
   CHECK_EQ((*mutable_blocks)[block_to_remove->index()], block_to_remove)
       << "You found a Ceres bug! \n"
@@ -411,7 +442,7 @@
   CHECK(residual_block != nullptr);
 
   // Verify that residual_block identifies a residual in the current problem.
-  const string residual_not_found_message = StringPrintf(
+  const std::string residual_not_found_message = StringPrintf(
       "Residual block to remove: %p not found. This usually means "
       "one of three things have happened:\n"
       " 1) residual_block is uninitialised and points to a random "
@@ -449,7 +480,7 @@
   if (options_.enable_fast_removal) {
     // Copy the dependent residuals from the parameter block because the set of
     // dependents will change after each call to RemoveResidualBlock().
-    vector<ResidualBlock*> residual_blocks_to_remove(
+    std::vector<ResidualBlock*> residual_blocks_to_remove(
         parameter_block->mutable_residual_blocks()->begin(),
         parameter_block->mutable_residual_blocks()->end());
     for (int i = 0; i < residual_blocks_to_remove.size(); ++i) {
@@ -518,32 +549,31 @@
                << "you can set its local parameterization.";
   }
 
-  // If the parameter block already has a local parameterization and
-  // we are to take ownership of local parameterizations, then add it
-  // to local_parameterizations_to_delete_ for eventual deletion.
-  if (parameter_block->local_parameterization_ &&
-      options_.local_parameterization_ownership == TAKE_OWNERSHIP) {
-    local_parameterizations_to_delete_.push_back(
-        parameter_block->local_parameterization_);
+  InternalSetParameterization(values, parameter_block, local_parameterization);
+}
+
+void ProblemImpl::SetManifold(double* values, Manifold* manifold) {
+  ParameterBlock* parameter_block =
+      FindWithDefault(parameter_block_map_, values, nullptr);
+  if (parameter_block == nullptr) {
+    LOG(FATAL) << "Parameter block not found: " << values
+               << ". You must add the parameter block to the problem before "
+               << "you can set its manifold.";
   }
 
-  parameter_block->SetParameterization(local_parameterization);
+  InternalSetManifold(values, parameter_block, manifold);
 }
 
 const LocalParameterization* ProblemImpl::GetParameterization(
     const double* values) const {
-  ParameterBlock* parameter_block = FindWithDefault(
-      parameter_block_map_, const_cast<double*>(values), nullptr);
-  if (parameter_block == nullptr) {
-    LOG(FATAL) << "Parameter block not found: " << values
-               << ". You must add the parameter block to the problem before "
-               << "you can get its local parameterization.";
-  }
-
-  return parameter_block->local_parameterization();
+  return FindWithDefault(parameter_block_to_local_param_, values, nullptr);
 }
 
 bool ProblemImpl::HasParameterization(const double* values) const {
+  return GetParameterization(values) != nullptr;
+}
+
+const Manifold* ProblemImpl::GetManifold(const double* values) const {
   ParameterBlock* parameter_block = FindWithDefault(
       parameter_block_map_, const_cast<double*>(values), nullptr);
   if (parameter_block == nullptr) {
@@ -552,7 +582,11 @@
                << "you can get its local parameterization.";
   }
 
-  return (parameter_block->local_parameterization() != nullptr);
+  return parameter_block->manifold();
+}
+
+bool ProblemImpl::HasManifold(const double* values) const {
+  return GetManifold(values) != nullptr;
 }
 
 void ProblemImpl::SetParameterLowerBound(double* values,
@@ -608,8 +642,8 @@
 
 bool ProblemImpl::Evaluate(const Problem::EvaluateOptions& evaluate_options,
                            double* cost,
-                           vector<double>* residuals,
-                           vector<double>* gradient,
+                           std::vector<double>* residuals,
+                           std::vector<double>* gradient,
                            CRSMatrix* jacobian) {
   if (cost == nullptr && residuals == nullptr && gradient == nullptr &&
       jacobian == nullptr) {
@@ -624,11 +658,11 @@
            ? evaluate_options.residual_blocks
            : program_->residual_blocks());
 
-  const vector<double*>& parameter_block_ptrs =
+  const std::vector<double*>& parameter_block_ptrs =
       evaluate_options.parameter_blocks;
 
-  vector<ParameterBlock*> variable_parameter_blocks;
-  vector<ParameterBlock*>& parameter_blocks =
+  std::vector<ParameterBlock*> variable_parameter_blocks;
+  std::vector<ParameterBlock*>& parameter_blocks =
       *program.mutable_parameter_blocks();
 
   if (parameter_block_ptrs.size() == 0) {
@@ -661,11 +695,12 @@
     // columns of the jacobian, we need to make sure that they are
     // constant during evaluation and then make them variable again
     // after we are done.
-    vector<ParameterBlock*> all_parameter_blocks(program_->parameter_blocks());
-    vector<ParameterBlock*> included_parameter_blocks(
+    std::vector<ParameterBlock*> all_parameter_blocks(
+        program_->parameter_blocks());
+    std::vector<ParameterBlock*> included_parameter_blocks(
         program.parameter_blocks());
 
-    vector<ParameterBlock*> excluded_parameter_blocks;
+    std::vector<ParameterBlock*> excluded_parameter_blocks;
     sort(all_parameter_blocks.begin(), all_parameter_blocks.end());
     sort(included_parameter_blocks.begin(), included_parameter_blocks.end());
     set_difference(all_parameter_blocks.begin(),
@@ -841,16 +876,16 @@
   return parameter_block->Size();
 }
 
-int ProblemImpl::ParameterBlockLocalSize(const double* values) const {
+int ProblemImpl::ParameterBlockTangentSize(const double* values) const {
   ParameterBlock* parameter_block = FindWithDefault(
       parameter_block_map_, const_cast<double*>(values), nullptr);
   if (parameter_block == nullptr) {
     LOG(FATAL) << "Parameter block not found: " << values
                << ". You must add the parameter block to the problem before "
-               << "you can get its local size.";
+               << "you can get its tangent size.";
   }
 
-  return parameter_block->LocalSize();
+  return parameter_block->TangentSize();
 }
 
 bool ProblemImpl::HasParameterBlock(const double* parameter_block) const {
@@ -858,7 +893,8 @@
           parameter_block_map_.end());
 }
 
-void ProblemImpl::GetParameterBlocks(vector<double*>* parameter_blocks) const {
+void ProblemImpl::GetParameterBlocks(
+    std::vector<double*>* parameter_blocks) const {
   CHECK(parameter_blocks != nullptr);
   parameter_blocks->resize(0);
   parameter_blocks->reserve(parameter_block_map_.size());
@@ -868,14 +904,14 @@
 }
 
 void ProblemImpl::GetResidualBlocks(
-    vector<ResidualBlockId>* residual_blocks) const {
+    std::vector<ResidualBlockId>* residual_blocks) const {
   CHECK(residual_blocks != nullptr);
   *residual_blocks = program().residual_blocks();
 }
 
 void ProblemImpl::GetParameterBlocksForResidualBlock(
     const ResidualBlockId residual_block,
-    vector<double*>* parameter_blocks) const {
+    std::vector<double*>* parameter_blocks) const {
   int num_parameter_blocks = residual_block->NumParameterBlocks();
   CHECK(parameter_blocks != nullptr);
   parameter_blocks->resize(num_parameter_blocks);
@@ -896,7 +932,7 @@
 }
 
 void ProblemImpl::GetResidualBlocksForParameterBlock(
-    const double* values, vector<ResidualBlockId>* residual_blocks) const {
+    const double* values, std::vector<ResidualBlockId>* residual_blocks) const {
   ParameterBlock* parameter_block = FindWithDefault(
       parameter_block_map_, const_cast<double*>(values), nullptr);
   if (parameter_block == nullptr) {
diff --git a/internal/ceres/problem_impl.h b/internal/ceres/problem_impl.h
index 5051c57..2a3e1e4 100644
--- a/internal/ceres/problem_impl.h
+++ b/internal/ceres/problem_impl.h
@@ -1,5 +1,5 @@
 // Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2021 Google Inc. All rights reserved.
 // http://ceres-solver.org/
 //
 // Redistribution and use in source and binary forms, with or without
@@ -42,11 +42,13 @@
 #include <array>
 #include <map>
 #include <memory>
+#include <unordered_map>
 #include <unordered_set>
 #include <vector>
 
 #include "ceres/context_impl.h"
 #include "ceres/internal/port.h"
+#include "ceres/manifold.h"
 #include "ceres/problem.h"
 #include "ceres/types.h"
 
@@ -100,6 +102,8 @@
                          int size,
                          LocalParameterization* local_parameterization);
 
+  void AddParameterBlock(double* values, int size, Manifold* manifold);
+
   void RemoveResidualBlock(ResidualBlock* residual_block);
   void RemoveParameterBlock(const double* values);
 
@@ -112,6 +116,10 @@
   const LocalParameterization* GetParameterization(const double* values) const;
   bool HasParameterization(const double* values) const;
 
+  void SetManifold(double* values, Manifold* manifold);
+  const Manifold* GetManifold(const double* values) const;
+  bool HasManifold(const double* values) const;
+
   void SetParameterLowerBound(double* values, int index, double lower_bound);
   void SetParameterUpperBound(double* values, int index, double upper_bound);
   double GetParameterLowerBound(const double* values, int index) const;
@@ -136,7 +144,7 @@
   int NumResiduals() const;
 
   int ParameterBlockSize(const double* parameter_block) const;
-  int ParameterBlockLocalSize(const double* parameter_block) const;
+  int ParameterBlockTangentSize(const double* parameter_block) const;
 
   bool HasParameterBlock(const double* parameter_block) const;
 
@@ -170,6 +178,14 @@
 
  private:
   ParameterBlock* InternalAddParameterBlock(double* values, int size);
+  void InternalSetParameterization(
+      double* values,
+      ParameterBlock* parameter_block,
+      LocalParameterization* local_parameterization);
+  void InternalSetManifold(double* values,
+                           ParameterBlock* parameter_block,
+                           Manifold* manifold);
+
   void InternalRemoveResidualBlock(ResidualBlock* residual_block);
 
   // Delete the arguments in question. These differ from the Remove* functions
@@ -195,20 +211,40 @@
   // The actual parameter and residual blocks.
   std::unique_ptr<internal::Program> program_;
 
+  // TODO(sameeragarwal): Unify the shared object handling across object types.
+  // Right now we are using vectors for LocalParameterization and Manifold
+  // objects and reference counting for CostFunctions and LossFunctions. Ideally
+  // this should be done uniformly.
+
   // When removing parameter blocks, parameterizations have ambiguous
   // ownership. Instead of scanning the entire problem to see if the
   // parameterization is shared with other parameter blocks, buffer
   // them until destruction.
-  //
-  // TODO(keir): See if it makes sense to use sets instead.
   std::vector<LocalParameterization*> local_parameterizations_to_delete_;
 
+  // When removing parameter blocks, manifolds have ambiguous
+  // ownership. Instead of scanning the entire problem to see if the
+  // manifold is shared with other parameter blocks, buffer
+  // them until destruction.
+  std::vector<Manifold*> manifolds_to_delete_;
+
   // For each cost function and loss function in the problem, a count
   // of the number of residual blocks that refer to them. When the
   // count goes to zero and the problem owns these objects, they are
   // destroyed.
   CostFunctionRefCount cost_function_ref_count_;
   LossFunctionRefCount loss_function_ref_count_;
+
+  // Because we wrap LocalParameterization objects in a ManifoldAdapter, when
+  // the user calls GetParameterization we cannot use the same logic as
+  // GetManifold, since the ParameterBlock object only stores a Manifold. So
+  // this map stores the association between parameter blocks and local
+  // parameterizations.
+  //
+  // This is a temporary object which will be removed once the
+  // LocalParameterization to Manifold transition is complete.
+  std::unordered_map<const double*, LocalParameterization*>
+      parameter_block_to_local_param_;
 };
 
 }  // namespace internal
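
To make the transition bookkeeping above concrete, a sketch of the behavior
when the deprecated LocalParameterization API is used, expressed through the
public Problem forwarding shown earlier in this patch:

    #include "ceres/local_parameterization.h"
    #include "ceres/problem.h"
    #include "glog/logging.h"

    void TransitionBehaviorSketch() {
      ceres::Problem problem;
      double w[4] = {1.0, 0.0, 0.0, 0.0};
      auto* parameterization = new ceres::QuaternionParameterization;

      // Deprecated path: the LocalParameterization is recorded in
      // parameter_block_to_local_param_ and wrapped in a ManifoldAdapter
      // before being handed to the ParameterBlock.
      problem.AddParameterBlock(w, 4, parameterization);

      // The old accessors still return the user-supplied object...
      CHECK_EQ(problem.GetParameterization(w), parameterization);
      // ...while the new accessors see the wrapping adapter, which reports the
      // same ambient and tangent sizes.
      CHECK(problem.HasManifold(w));
      CHECK_EQ(problem.GetManifold(w)->TangentSize(), 3);
    }
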
diff --git a/internal/ceres/problem_test.cc b/internal/ceres/problem_test.cc
index 67faa7e..3d00f30 100644
--- a/internal/ceres/problem_test.cc
+++ b/internal/ceres/problem_test.cc
@@ -148,7 +148,7 @@
   problem.AddParameterBlock(y, 4);
   problem.AddParameterBlock(z, 5);
 
-  EXPECT_DEATH_IF_SUPPORTED(problem.AddResidualBlock(NULL, NULL, x),
+  EXPECT_DEATH_IF_SUPPORTED(problem.AddResidualBlock(nullptr, nullptr, x),
                             "cost_function != nullptr");
 }
 
@@ -162,7 +162,7 @@
 
   // UnaryCostFunction takes only one parameter, but two are passed.
   EXPECT_DEATH_IF_SUPPORTED(
-      problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x, y),
+      problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x, y),
       "num_parameter_blocks");
 }
 
@@ -170,10 +170,10 @@
   double x[3];
 
   Problem problem;
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
   EXPECT_DEATH_IF_SUPPORTED(
       problem.AddResidualBlock(
-          new UnaryCostFunction(2, 4 /* 4 != 3 */), NULL, x),
+          new UnaryCostFunction(2, 4 /* 4 != 3 */), nullptr, x),
       "different block sizes");
 }
 
@@ -182,11 +182,11 @@
 
   Problem problem;
   EXPECT_DEATH_IF_SUPPORTED(
-      problem.AddResidualBlock(new BinaryCostFunction(2, 3, 3), NULL, x, x),
+      problem.AddResidualBlock(new BinaryCostFunction(2, 3, 3), nullptr, x, x),
       "Duplicate parameter blocks");
   EXPECT_DEATH_IF_SUPPORTED(
       problem.AddResidualBlock(
-          new TernaryCostFunction(1, 5, 3, 5), NULL, z, x, z),
+          new TernaryCostFunction(1, 5, 3, 5), nullptr, z, x, z),
       "Duplicate parameter blocks");
 }
 
@@ -201,7 +201,7 @@
   // The cost function expects the size of the second parameter, z, to be 4
   // instead of 5 as declared above. This is fatal.
   EXPECT_DEATH_IF_SUPPORTED(
-      problem.AddResidualBlock(new BinaryCostFunction(2, 3, 4), NULL, x, z),
+      problem.AddResidualBlock(new BinaryCostFunction(2, 3, 4), nullptr, x, z),
       "different block sizes");
 }
 
@@ -209,10 +209,10 @@
   double x[3], y[4], z[5];
 
   Problem problem;
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 4), NULL, y);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 5), NULL, z);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 4), nullptr, y);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 5), nullptr, z);
 
   EXPECT_EQ(3, problem.NumParameterBlocks());
   EXPECT_EQ(12, problem.NumParameters());
@@ -278,15 +278,15 @@
 
   // Creating parameter blocks multiple times is ignored.
   problem.AddParameterBlock(x, 3);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
 
   // ... even repeatedly.
   problem.AddParameterBlock(x, 3);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
 
   // More parameters are fine.
   problem.AddParameterBlock(y, 4);
-  problem.AddResidualBlock(new UnaryCostFunction(2, 4), NULL, y);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 4), nullptr, y);
 
   EXPECT_EQ(2, problem.NumParameterBlocks());
   EXPECT_EQ(7, problem.NumParameters());
@@ -317,11 +317,12 @@
   EXPECT_EQ(4, problem.NumParameterBlocks());
   EXPECT_EQ(16, problem.NumParameters());
 
-  problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
-  problem.AddResidualBlock(new BinaryCostFunction(6, 5, 4), NULL, z, y);
-  problem.AddResidualBlock(new BinaryCostFunction(3, 3, 5), NULL, x, z);
-  problem.AddResidualBlock(new BinaryCostFunction(7, 5, 3), NULL, z, x);
-  problem.AddResidualBlock(new TernaryCostFunction(1, 5, 3, 4), NULL, z, x, y);
+  problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
+  problem.AddResidualBlock(new BinaryCostFunction(6, 5, 4), nullptr, z, y);
+  problem.AddResidualBlock(new BinaryCostFunction(3, 3, 5), nullptr, x, z);
+  problem.AddResidualBlock(new BinaryCostFunction(7, 5, 3), nullptr, z, x);
+  problem.AddResidualBlock(
+      new TernaryCostFunction(1, 5, 3, 4), nullptr, z, x, y);
 
   const int total_residuals = 2 + 6 + 3 + 7 + 1;
   EXPECT_EQ(problem.NumResidualBlocks(), 5);
@@ -357,9 +358,9 @@
     problem.AddParameterBlock(z, 5);
 
     CostFunction* cost = new DestructorCountingCostFunction(&num_destructions);
-    problem.AddResidualBlock(cost, NULL, y, z);
-    problem.AddResidualBlock(cost, NULL, y, z);
-    problem.AddResidualBlock(cost, NULL, y, z);
+    problem.AddResidualBlock(cost, nullptr, y, z);
+    problem.AddResidualBlock(cost, nullptr, y, z);
+    problem.AddResidualBlock(cost, nullptr, y, z);
     EXPECT_EQ(3, problem.NumResidualBlocks());
   }
 
@@ -372,10 +373,11 @@
   Problem problem;
   CostFunction* cost_function = new UnaryCostFunction(2, 3);
   const ResidualBlockId residual_block =
-      problem.AddResidualBlock(cost_function, NULL, x);
+      problem.AddResidualBlock(cost_function, nullptr, x);
   EXPECT_EQ(problem.GetCostFunctionForResidualBlock(residual_block),
             cost_function);
-  EXPECT_TRUE(problem.GetLossFunctionForResidualBlock(residual_block) == NULL);
+  EXPECT_TRUE(problem.GetLossFunctionForResidualBlock(residual_block) ==
+              nullptr);
 }
 
 TEST(Problem, GetLossFunctionForResidualBlock) {
@@ -403,8 +405,8 @@
         new DestructorCountingCostFunction(&num_destructions);
     CostFunction* cost_wz =
         new DestructorCountingCostFunction(&num_destructions);
-    ResidualBlock* r_yz = problem.AddResidualBlock(cost_yz, NULL, y, z);
-    ResidualBlock* r_wz = problem.AddResidualBlock(cost_wz, NULL, w, z);
+    ResidualBlock* r_yz = problem.AddResidualBlock(cost_yz, nullptr, y, z);
+    ResidualBlock* r_wz = problem.AddResidualBlock(cost_wz, nullptr, w, z);
     EXPECT_EQ(2, problem.NumResidualBlocks());
 
     problem.RemoveResidualBlock(r_yz);
@@ -587,6 +589,17 @@
       "Parameter block not found:");
 }
 
+TEST(Problem, SetManifoldWithUnknownPtrDies) {
+  double x[3];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 3);
+
+  EXPECT_DEATH_IF_SUPPORTED(problem.SetManifold(y, new EuclideanManifold(3)),
+                            "Parameter block not found:");
+}
+
 TEST(Problem, RemoveParameterBlockWithUnknownPtrDies) {
   double x[3];
   double y[2];
@@ -609,7 +622,21 @@
   LocalParameterization* parameterization = new IdentityParameterization(3);
   problem.SetParameterization(x, parameterization);
   EXPECT_EQ(problem.GetParameterization(x), parameterization);
-  EXPECT_TRUE(problem.GetParameterization(y) == NULL);
+  EXPECT_TRUE(problem.GetParameterization(y) == nullptr);
+}
+
+TEST(Problem, GetManifold) {
+  double x[3];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 3);
+  problem.AddParameterBlock(y, 2);
+
+  Manifold* manifold = new EuclideanManifold(3);
+  problem.SetManifold(x, manifold);
+  EXPECT_EQ(problem.GetManifold(x), manifold);
+  EXPECT_TRUE(problem.GetManifold(y) == nullptr);
 }
 
 TEST(Problem, HasParameterization) {
@@ -626,7 +653,277 @@
   EXPECT_FALSE(problem.HasParameterization(y));
 }
 
-TEST(Problem, ParameterBlockQueryTest) {
+TEST(Problem, HasManifold) {
+  double x[3];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 3);
+  problem.AddParameterBlock(y, 2);
+
+  Manifold* manifold = new EuclideanManifold(3);
+  problem.SetManifold(x, manifold);
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_FALSE(problem.HasManifold(y));
+}
+
+TEST(Problem, RepeatedAddParameterBlockResetsParameterization) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetParameterization(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 2);
+
+  problem.AddParameterBlock(x, 4, static_cast<LocalParameterization*>(nullptr));
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_FALSE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 4);
+  EXPECT_EQ(problem.GetManifold(x), nullptr);
+  EXPECT_EQ(problem.GetParameterization(x), nullptr);
+
+  problem.AddParameterBlock(x, 4, new SubsetParameterization(4, {0, 1, 2}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 1);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 1);
+}
+
+TEST(Problem, RepeatedAddParameterBlockResetsManifold) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasManifold(y));
+
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.AddParameterBlock(x, 4, static_cast<Manifold*>(nullptr));
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_FALSE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 4);
+  EXPECT_EQ(problem.GetManifold(x), nullptr);
+  EXPECT_EQ(problem.GetParameterization(x), nullptr);
+
+  problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1, 2}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 1);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+}
+
+TEST(Problem, AddParameterBlockWithManifoldNullLocalParameterization) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasManifold(y));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.AddParameterBlock(x, 4, static_cast<LocalParameterization*>(nullptr));
+  EXPECT_FALSE(problem.HasParameterization(y));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_FALSE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 4);
+  EXPECT_EQ(problem.GetManifold(x), nullptr);
+  EXPECT_EQ(problem.GetParameterization(x), nullptr);
+}
+
+TEST(Problem, AddParameterBlockWithLocalParameterizationNullManifold) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetParameterization(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasManifold(y));
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.AddParameterBlock(x, 4, static_cast<Manifold*>(nullptr));
+  EXPECT_FALSE(problem.HasParameterization(y));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_FALSE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 4);
+  EXPECT_EQ(problem.GetManifold(x), nullptr);
+  EXPECT_EQ(problem.GetParameterization(x), nullptr);
+}
+
+TEST(Problem, RepeatedAddParameterBlockManifoldThenLocalParameterization) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasManifold(y));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.AddParameterBlock(x, 4, new SubsetParameterization(4, {0, 1, 2}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 1);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 1);
+}
+
+TEST(Problem, RepeatedAddParameterBlockLocalParameterizationThenManifold) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4, new SubsetParameterization(4, {0, 1, 2}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 1);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 1);
+
+  problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1}));
+  problem.AddParameterBlock(y, 2);
+
+  EXPECT_FALSE(problem.HasManifold(y));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+}
+
+TEST(Problem, ParameterizationAndManifoldSittingInATree) {
+  double x[4];
+  double y[2];
+
+  Problem problem;
+  problem.AddParameterBlock(x, 4);
+  problem.AddParameterBlock(y, 2);
+
+  problem.SetParameterization(x, new SubsetParameterization(4, {0, 1}));
+
+  EXPECT_FALSE(problem.HasParameterization(y));
+
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 2);
+
+  problem.SetManifold(x, new SubsetManifold(4, {1, 2, 3}));
+  EXPECT_FALSE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 1);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+  EXPECT_EQ(problem.GetParameterization(x), nullptr);
+
+  problem.SetParameterization(x, new SubsetParameterization(4, {0, 1}));
+  EXPECT_TRUE(problem.HasParameterization(x));
+  EXPECT_TRUE(problem.HasManifold(x));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+  EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 4);
+  EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 2);
+}
+
+TEST(Problem, ParameterBlockQueryTestUsingLocalParameterization) {
   double x[3];
   double y[4];
   Problem problem;
@@ -656,6 +953,64 @@
   EXPECT_TRUE(parameter_blocks[0] == y);
 }
 
+TEST(Problem, ParameterBlockQueryTestUsingManifold) {
+  double x[3];
+  double y[4];
+  Problem problem;
+  problem.AddParameterBlock(x, 3);
+  problem.AddParameterBlock(y, 4);
+
+  vector<int> constant_parameters;
+  constant_parameters.push_back(0);
+  problem.SetManifold(x, new SubsetManifold(3, constant_parameters));
+  EXPECT_EQ(problem.ParameterBlockSize(x), 3);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
+  EXPECT_EQ(problem.ParameterBlockLocalSize(y), 4);
+
+  vector<double*> parameter_blocks;
+  problem.GetParameterBlocks(&parameter_blocks);
+  EXPECT_EQ(parameter_blocks.size(), 2);
+  EXPECT_NE(parameter_blocks[0], parameter_blocks[1]);
+  EXPECT_TRUE(parameter_blocks[0] == x || parameter_blocks[0] == y);
+  EXPECT_TRUE(parameter_blocks[1] == x || parameter_blocks[1] == y);
+
+  EXPECT_TRUE(problem.HasParameterBlock(x));
+  problem.RemoveParameterBlock(x);
+  EXPECT_FALSE(problem.HasParameterBlock(x));
+  problem.GetParameterBlocks(&parameter_blocks);
+  EXPECT_EQ(parameter_blocks.size(), 1);
+  EXPECT_TRUE(parameter_blocks[0] == y);
+}
+
 TEST_P(DynamicProblem, RemoveParameterBlockWithNoResiduals) {
   problem->AddParameterBlock(y, 4);
   problem->AddParameterBlock(z, 5);
@@ -734,13 +1089,13 @@
   CostFunction* cost_z   = new UnaryCostFunction  (1, 5);
   CostFunction* cost_w   = new UnaryCostFunction  (1, 3);
 
-  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
-  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  NULL, y, z);
-  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  NULL, y, w);
-  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  NULL, z, w);
-  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   NULL, y);
-  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   NULL, z);
-  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   NULL, w);
+  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  nullptr, y, z);
+  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  nullptr, y, w);
+  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  nullptr, z, w);
+  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   nullptr, y);
+  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   nullptr, z);
+  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   nullptr, w);
 
   EXPECT_EQ(3, problem->NumParameterBlocks());
   EXPECT_EQ(7, NumResidualBlocks());
@@ -795,13 +1150,13 @@
   CostFunction* cost_z   = new UnaryCostFunction  (1, 5);
   CostFunction* cost_w   = new UnaryCostFunction  (1, 3);
 
-  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
-  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  NULL, y, z);
-  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  NULL, y, w);
-  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  NULL, z, w);
-  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   NULL, y);
-  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   NULL, z);
-  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   NULL, w);
+  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  nullptr, y, z);
+  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  nullptr, y, w);
+  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  nullptr, z, w);
+  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   nullptr, y);
+  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   nullptr, z);
+  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   nullptr, w);
 
   if (GetParam()) {
     // In this test parameterization, there should be back-pointers from the
@@ -811,9 +1166,9 @@
     ExpectParameterBlockContains(w, r_yzw, r_yw, r_zw, r_w);
   } else {
     // Otherwise, nothing.
-    EXPECT_TRUE(GetParameterBlock(0)->mutable_residual_blocks() == NULL);
-    EXPECT_TRUE(GetParameterBlock(1)->mutable_residual_blocks() == NULL);
-    EXPECT_TRUE(GetParameterBlock(2)->mutable_residual_blocks() == NULL);
+    EXPECT_TRUE(GetParameterBlock(0)->mutable_residual_blocks() == nullptr);
+    EXPECT_TRUE(GetParameterBlock(1)->mutable_residual_blocks() == nullptr);
+    EXPECT_TRUE(GetParameterBlock(2)->mutable_residual_blocks() == nullptr);
   }
   EXPECT_EQ(3, problem->NumParameterBlocks());
   EXPECT_EQ(7, NumResidualBlocks());
@@ -920,13 +1275,13 @@
   CostFunction* cost_z   = new UnaryCostFunction  (1, 5);
   CostFunction* cost_w   = new UnaryCostFunction  (1, 3);
 
-  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
-  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  NULL, y, z);
-  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  NULL, y, w);
-  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  NULL, z, w);
-  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   NULL, y);
-  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   NULL, z);
-  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   NULL, w);
+  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  nullptr, y, z);
+  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  nullptr, y, w);
+  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  nullptr, z, w);
+  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   nullptr, y);
+  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   nullptr, z);
+  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   nullptr, w);
 
   // clang-format on
 
@@ -1007,39 +1362,39 @@
   CostFunction* cost_z   = new UnaryCostFunction  (1, 5);
   CostFunction* cost_w   = new UnaryCostFunction  (1, 3);
 
-  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
+  ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
   {
     ResidualBlockId expected_residuals[] = {r_yzw, 0};
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  NULL, y, z);
+  ResidualBlock* r_yz  = problem->AddResidualBlock(cost_yz,  nullptr, y, z);
   {
     ResidualBlockId expected_residuals[] = {r_yzw, r_yz, 0};
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  NULL, y, w);
+  ResidualBlock* r_yw  = problem->AddResidualBlock(cost_yw,  nullptr, y, w);
   {
     ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, 0};
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  NULL, z, w);
+  ResidualBlock* r_zw  = problem->AddResidualBlock(cost_zw,  nullptr, z, w);
   {
     ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, 0};
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   NULL, y);
+  ResidualBlock* r_y   = problem->AddResidualBlock(cost_y,   nullptr, y);
   {
     ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, r_y, 0};
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   NULL, z);
+  ResidualBlock* r_z   = problem->AddResidualBlock(cost_z,   nullptr, z);
   {
     ResidualBlock *expected_residuals[] = {
       r_yzw, r_yz, r_yw, r_zw, r_y, r_z, 0
     };
     ExpectProblemHasResidualBlocks(*problem, expected_residuals);
   }
-  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   NULL, w);
+  ResidualBlock* r_w   = problem->AddResidualBlock(cost_w,   nullptr, w);
   {
     ResidualBlock *expected_residuals[] = {
       r_yzw, r_yz, r_yw, r_zw, r_y, r_z, r_w, 0
@@ -1056,10 +1411,10 @@
     ResidualBlockId expected_residual_blocks[10];
   };
   GetResidualBlocksForParameterBlockTestCase get_residual_blocks_cases[] = {
-    { y, { r_yzw, r_yz, r_yw, r_y, NULL} },
-    { z, { r_yzw, r_yz, r_zw, r_z, NULL} },
-    { w, { r_yzw, r_yw, r_zw, r_w, NULL} },
-    { NULL, { NULL } }
+    { y, { r_yzw, r_yz, r_yw, r_y, nullptr} },
+    { z, { r_yzw, r_yz, r_zw, r_z, nullptr} },
+    { w, { r_yzw, r_yw, r_zw, r_w, nullptr} },
+    { nullptr, { nullptr } }
   };
   for (int i = 0; get_residual_blocks_cases[i].parameter_block; ++i) {
     problem->GetResidualBlocksForParameterBlock(
@@ -1076,14 +1431,14 @@
     double* expected_parameter_blocks[10];
   };
   GetParameterBlocksForResidualBlockTestCase get_parameter_blocks_cases[] = {
-    { r_yzw, { y, z, w, NULL } },
-    { r_yz , { y, z, NULL } },
-    { r_yw , { y, w, NULL } },
-    { r_zw , { z, w, NULL } },
-    { r_y  , { y, NULL } },
-    { r_z  , { z, NULL } },
-    { r_w  , { w, NULL } },
-    { NULL, { NULL } }
+    { r_yzw, { y, z, w, nullptr } },
+    { r_yz , { y, z, nullptr } },
+    { r_yw , { y, w, nullptr } },
+    { r_zw , { z, w, nullptr } },
+    { r_y  , { y, nullptr } },
+    { r_z  , { z, nullptr } },
+    { r_w  , { w, nullptr } },
+    { nullptr, { nullptr } }
   };
   for (int i = 0; get_parameter_blocks_cases[i].residual_block; ++i) {
     problem->GetParameterBlocksForResidualBlock(
@@ -1126,12 +1481,12 @@
       }
     }
 
-    if (jacobians == NULL) {
+    if (jacobians == nullptr) {
       return true;
     }
 
     for (int j = 0; j < kNumParameterBlocks; ++j) {
-      if (jacobians[j] != NULL) {
+      if (jacobians[j] != nullptr) {
         MatrixRef(jacobians[j], kNumResiduals, kNumResiduals) =
             (-2.0 * (j + 1.0) * ConstVectorRef(parameters[j], kNumResiduals))
                 .asDiagonal();
@@ -1171,13 +1526,13 @@
 
     // f(x, y)
     residual_blocks_.push_back(problem_.AddResidualBlock(
-        cost_function, NULL, parameters_, parameters_ + 2));
+        cost_function, nullptr, parameters_, parameters_ + 2));
     // g(y, z)
     residual_blocks_.push_back(problem_.AddResidualBlock(
-        cost_function, NULL, parameters_ + 2, parameters_ + 4));
+        cost_function, nullptr, parameters_ + 2, parameters_ + 4));
     // h(z, x)
     residual_blocks_.push_back(problem_.AddResidualBlock(
-        cost_function, NULL, parameters_ + 4, parameters_));
+        cost_function, nullptr, parameters_ + 4, parameters_));
   }
 
   void TearDown() { EXPECT_TRUE(problem_.program().IsValid()); }
@@ -1197,25 +1552,25 @@
     EXPECT_TRUE(
         problem_.Evaluate(options,
                           &cost,
-                          expected_residuals != NULL ? &residuals : NULL,
-                          expected_gradient != NULL ? &gradient : NULL,
-                          expected_jacobian != NULL ? &jacobian : NULL));
+                          expected_residuals != nullptr ? &residuals : nullptr,
+                          expected_gradient != nullptr ? &gradient : nullptr,
+                          expected_jacobian != nullptr ? &jacobian : nullptr));
 
-    if (expected_residuals != NULL) {
+    if (expected_residuals != nullptr) {
       EXPECT_EQ(residuals.size(), expected_num_rows);
     }
 
-    if (expected_gradient != NULL) {
+    if (expected_gradient != nullptr) {
       EXPECT_EQ(gradient.size(), expected_num_cols);
     }
 
-    if (expected_jacobian != NULL) {
+    if (expected_jacobian != nullptr) {
       EXPECT_EQ(jacobian.num_rows, expected_num_rows);
       EXPECT_EQ(jacobian.num_cols, expected_num_cols);
     }
 
     Matrix dense_jacobian;
-    if (expected_jacobian != NULL) {
+    if (expected_jacobian != nullptr) {
       CRSToDenseMatrix(jacobian, &dense_jacobian);
     }
 
@@ -1226,8 +1581,8 @@
                        expected_gradient,
                        expected_jacobian,
                        cost,
-                       residuals.size() > 0 ? &residuals[0] : NULL,
-                       gradient.size() > 0 ? &gradient[0] : NULL,
+                       residuals.size() > 0 ? &residuals[0] : nullptr,
+                       gradient.size() > 0 ? &gradient[0] : nullptr,
                        dense_jacobian.data());
   }
 
@@ -1238,9 +1593,9 @@
                          expected.num_rows,
                          expected.num_cols,
                          expected.cost,
-                         (i & 1) ? expected.residuals : NULL,
-                         (i & 2) ? expected.gradient : NULL,
-                         (i & 4) ? expected.jacobian : NULL);
+                         (i & 1) ? expected.residuals : nullptr,
+                         (i & 2) ? expected.gradient : nullptr,
+                         (i & 4) ? expected.jacobian : nullptr);
     }
   }
 
@@ -1581,6 +1936,43 @@
   CheckAllEvaluationCombinations(Problem::EvaluateOptions(), expected);
 }
 
+TEST_F(ProblemEvaluateTest, Manifold) {
+  // clang-format off
+  ExpectedEvaluation expected = {
+    // Rows/columns
+    6, 5,
+    // Cost
+    7607.0,
+    // Residuals
+    { -19.0, -35.0,  // f
+      -59.0, -87.0,  // g
+      -27.0, -43.0   // h
+    },
+    // Gradient
+    {  146.0,  484.0,  // x
+      1256.0,          // y with SubsetManifold
+      1450.0, 2604.0,  // z
+    },
+    // Jacobian
+    //                       x      y             z
+    { /* f(x, y) */ -2.0,  0.0,   0.0,   0.0,   0.0,
+                     0.0, -4.0, -16.0,   0.0,   0.0,
+      /* g(y, z) */  0.0,  0.0,   0.0, -20.0,   0.0,
+                     0.0,  0.0,  -8.0,   0.0, -24.0,
+      /* h(z, x) */ -4.0,  0.0,   0.0, -10.0,   0.0,
+                     0.0, -8.0,   0.0,   0.0, -12.0
+    }
+  };
+  // clang-format on
+
+  vector<int> constant_parameters;
+  constant_parameters.push_back(0);
+  problem_.SetManifold(parameters_ + 2,
+                       new SubsetManifold(2, constant_parameters));
+
+  CheckAllEvaluationCombinations(Problem::EvaluateOptions(), expected);
+}
+
 struct IdentityFunctor {
   template <typename T>
   bool operator()(const T* x, const T* y, T* residuals) const {
@@ -1907,6 +2299,49 @@
       << actual_dfdy;
 }
 
+TEST_F(ProblemEvaluateResidualBlockTest, OneResidualBlockWithOneManifold) {
+  ResidualBlockId residual_block_id =
+      problem_.AddResidualBlock(IdentityFunctor::Create(), nullptr, x_, y_);
+  problem_.SetManifold(x_, new SubsetManifold(2, {1}));
+
+  Vector expected_f(5);
+  expected_f << 1, 2, 1, 2, 3;
+  Matrix expected_dfdx = Matrix::Zero(5, 1);
+  expected_dfdx.block(0, 0, 1, 1) = Matrix::Identity(1, 1);
+  Matrix expected_dfdy = Matrix::Zero(5, 3);
+  expected_dfdy.block(2, 0, 3, 3) = Matrix::Identity(3, 3);
+  double expected_cost = expected_f.squaredNorm() / 2.0;
+
+  double actual_cost;
+  Vector actual_f(5);
+  Matrix actual_dfdx(5, 1);
+  Matrix actual_dfdy(5, 3);
+  double* jacobians[2] = {actual_dfdx.data(), actual_dfdy.data()};
+  EXPECT_TRUE(problem_.EvaluateResidualBlock(residual_block_id,
+                                             kApplyLossFunction,
+                                             kNewPoint,
+                                             &actual_cost,
+                                             actual_f.data(),
+                                             jacobians));
+
+  EXPECT_NEAR(std::abs(expected_cost - actual_cost) / actual_cost,
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_cost;
+  EXPECT_NEAR((expected_f - actual_f).norm() / actual_f.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_f;
+  EXPECT_NEAR((expected_dfdx - actual_dfdx).norm() / actual_dfdx.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_dfdx;
+  EXPECT_NEAR((expected_dfdy - actual_dfdy).norm() / actual_dfdy.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_dfdy;
+}
+
 TEST_F(ProblemEvaluateResidualBlockTest,
        OneResidualBlockWithTwoLocalParameterizations) {
   ResidualBlockId residual_block_id =
@@ -1952,6 +2387,50 @@
       << actual_dfdy;
 }
 
+TEST_F(ProblemEvaluateResidualBlockTest, OneResidualBlockWithTwoManifolds) {
+  ResidualBlockId residual_block_id =
+      problem_.AddResidualBlock(IdentityFunctor::Create(), nullptr, x_, y_);
+  problem_.SetManifold(x_, new SubsetManifold(2, {1}));
+  problem_.SetManifold(y_, new SubsetManifold(3, {2}));
+
+  Vector expected_f(5);
+  expected_f << 1, 2, 1, 2, 3;
+  Matrix expected_dfdx = Matrix::Zero(5, 1);
+  expected_dfdx.block(0, 0, 1, 1) = Matrix::Identity(1, 1);
+  Matrix expected_dfdy = Matrix::Zero(5, 2);
+  expected_dfdy.block(2, 0, 2, 2) = Matrix::Identity(2, 2);
+  double expected_cost = expected_f.squaredNorm() / 2.0;
+
+  double actual_cost;
+  Vector actual_f(5);
+  Matrix actual_dfdx(5, 1);
+  Matrix actual_dfdy(5, 2);
+  double* jacobians[2] = {actual_dfdx.data(), actual_dfdy.data()};
+  EXPECT_TRUE(problem_.EvaluateResidualBlock(residual_block_id,
+                                             kApplyLossFunction,
+                                             kNewPoint,
+                                             &actual_cost,
+                                             actual_f.data(),
+                                             jacobians));
+
+  EXPECT_NEAR(std::abs(expected_cost - actual_cost) / actual_cost,
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_cost;
+  EXPECT_NEAR((expected_f - actual_f).norm() / actual_f.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_f;
+  EXPECT_NEAR((expected_dfdx - actual_dfdx).norm() / actual_dfdx.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_dfdx;
+  EXPECT_NEAR((expected_dfdy - actual_dfdy).norm() / actual_dfdy.norm(),
+              0,
+              std::numeric_limits<double>::epsilon())
+      << actual_dfdy;
+}
+
 TEST_F(ProblemEvaluateResidualBlockTest,
        OneResidualBlockWithOneConstantParameterBlock) {
   ResidualBlockId residual_block_id =
@@ -2166,6 +2645,19 @@
   EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 1);
 }
 
+TEST(Problem, SetManifoldTwice) {
+  Problem problem;
+  double x[] = {1.0, 2.0, 3.0};
+  problem.AddParameterBlock(x, 3);
+  problem.SetManifold(x, new SubsetManifold(3, {1}));
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.SetManifold(x, new SubsetManifold(3, {0, 1}));
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+}
+
 TEST(Problem, SetParameterizationAndThenClearItWithNull) {
   Problem problem;
   double x[] = {1.0, 2.0, 3.0};
@@ -2180,6 +2672,20 @@
   EXPECT_EQ(problem.ParameterBlockSize(x), 3);
 }
 
+TEST(Problem, SetManifoldAndThenClearItWithNull) {
+  Problem problem;
+  double x[] = {1.0, 2.0, 3.0};
+  problem.AddParameterBlock(x, 3);
+  problem.SetManifold(x, new SubsetManifold(3, {1}));
+  EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+  EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+  problem.SetManifold(x, nullptr);
+  EXPECT_EQ(problem.GetManifold(x), nullptr);
+  EXPECT_EQ(problem.ParameterBlockTangentSize(x), 3);
+  EXPECT_EQ(problem.ParameterBlockSize(x), 3);
+}
+
 TEST(Solver, ZeroSizedLocalParameterizationMeansParameterBlockIsConstant) {
   double x = 0.0;
   double y = 1.0;
@@ -2189,6 +2695,15 @@
   EXPECT_TRUE(problem.IsParameterBlockConstant(&y));
 }
 
+TEST(Solver, ZeroTangentSizedManifoldMeansParameterBlockIsConstant) {
+  double x = 0.0;
+  double y = 1.0;
+  Problem problem;
+  problem.AddResidualBlock(new BinaryCostFunction(1, 1, 1), nullptr, &x, &y);
+  problem.SetManifold(&y, new SubsetManifold(1, {0}));
+  EXPECT_TRUE(problem.IsParameterBlockConstant(&y));
+}
+
 class MockEvaluationCallback : public EvaluationCallback {
  public:
   MOCK_METHOD2(PrepareForEvaluation, void(bool, bool));
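
Taken together, the tests above pin down how the two APIs co-exist from the
user's point of view. A compact usage example, with the expected values taken
from the tests (the function name is only for illustration):

  #include "ceres/local_parameterization.h"
  #include "ceres/manifold.h"
  #include "ceres/problem.h"

  void DualViewExample() {
    double x[4];
    ceres::Problem problem;
    problem.AddParameterBlock(x, 4);

    // Setting a LocalParameterization also makes the block answer Manifold
    // queries, since it is wrapped in a ManifoldAdapter internally.
    ceres::LocalParameterization* subset =
        new ceres::SubsetParameterization(4, {0, 1});
    problem.SetParameterization(x, subset);
    // problem.GetParameterization(x)        == subset
    // problem.HasManifold(x)                == true
    // problem.GetManifold(x)->TangentSize() == 2

    // Setting a Manifold switches the block over and drops the local
    // parameterization association.
    problem.SetManifold(x, new ceres::SubsetManifold(4, {1, 2, 3}));
    // problem.HasParameterization(x)        == false
    // problem.GetParameterization(x)        == nullptr
    // problem.ParameterBlockTangentSize(x)  == 1
  }
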
diff --git a/internal/ceres/program.cc b/internal/ceres/program.cc
index f1ded2e..341564b 100644
--- a/internal/ceres/program.cc
+++ b/internal/ceres/program.cc
@@ -41,8 +41,8 @@
 #include "ceres/cost_function.h"
 #include "ceres/evaluator.h"
 #include "ceres/internal/port.h"
-#include "ceres/local_parameterization.h"
 #include "ceres/loss_function.h"
+#include "ceres/manifold.h"
 #include "ceres/map_util.h"
 #include "ceres/parameter_block.h"
 #include "ceres/problem.h"
@@ -53,11 +53,6 @@
 namespace ceres {
 namespace internal {
 
-using std::max;
-using std::set;
-using std::string;
-using std::vector;
-
 Program::Program() {}
 
 Program::Program(const Program& program)
@@ -65,19 +60,19 @@
       residual_blocks_(program.residual_blocks_),
       evaluation_callback_(program.evaluation_callback_) {}
 
-const vector<ParameterBlock*>& Program::parameter_blocks() const {
+const std::vector<ParameterBlock*>& Program::parameter_blocks() const {
   return parameter_blocks_;
 }
 
-const vector<ResidualBlock*>& Program::residual_blocks() const {
+const std::vector<ResidualBlock*>& Program::residual_blocks() const {
   return residual_blocks_;
 }
 
-vector<ParameterBlock*>* Program::mutable_parameter_blocks() {
+std::vector<ParameterBlock*>* Program::mutable_parameter_blocks() {
   return &parameter_blocks_;
 }
 
-vector<ResidualBlock*>* Program::mutable_residual_blocks() {
+std::vector<ResidualBlock*>* Program::mutable_residual_blocks() {
   return &residual_blocks_;
 }
 
@@ -127,7 +122,7 @@
       return false;
     }
     state += parameter_blocks_[i]->Size();
-    delta += parameter_blocks_[i]->LocalSize();
+    delta += parameter_blocks_[i]->TangentSize();
     state_plus_delta += parameter_blocks_[i]->Size();
   }
   return true;
@@ -150,7 +145,7 @@
     parameter_blocks_[i]->set_state_offset(state_offset);
     parameter_blocks_[i]->set_delta_offset(delta_offset);
     state_offset += parameter_blocks_[i]->Size();
-    delta_offset += parameter_blocks_[i]->LocalSize();
+    delta_offset += parameter_blocks_[i]->TangentSize();
   }
 }
 
@@ -178,13 +173,13 @@
     }
 
     state_offset += parameter_blocks_[i]->Size();
-    delta_offset += parameter_blocks_[i]->LocalSize();
+    delta_offset += parameter_blocks_[i]->TangentSize();
   }
 
   return true;
 }
 
-bool Program::ParameterBlocksAreFinite(string* message) const {
+bool Program::ParameterBlocksAreFinite(std::string* message) const {
   CHECK(message != nullptr);
   for (int i = 0; i < parameter_blocks_.size(); ++i) {
     const ParameterBlock* parameter_block = parameter_blocks_[i];
@@ -225,7 +220,7 @@
   return false;
 }
 
-bool Program::IsFeasible(string* message) const {
+bool Program::IsFeasible(std::string* message) const {
   CHECK(message != nullptr);
   for (int i = 0; i < parameter_blocks_.size(); ++i) {
     const ParameterBlock* parameter_block = parameter_blocks_[i];
@@ -285,9 +280,9 @@
 }
 
 Program* Program::CreateReducedProgram(
-    vector<double*>* removed_parameter_blocks,
+    std::vector<double*>* removed_parameter_blocks,
     double* fixed_cost,
-    string* error) const {
+    std::string* error) const {
   CHECK(removed_parameter_blocks != nullptr);
   CHECK(fixed_cost != nullptr);
   CHECK(error != nullptr);
@@ -302,9 +297,9 @@
   return reduced_program.release();
 }
 
-bool Program::RemoveFixedBlocks(vector<double*>* removed_parameter_blocks,
+bool Program::RemoveFixedBlocks(std::vector<double*>* removed_parameter_blocks,
                                 double* fixed_cost,
-                                string* error) {
+                                std::string* error) {
   CHECK(removed_parameter_blocks != nullptr);
   CHECK(fixed_cost != nullptr);
   CHECK(error != nullptr);
@@ -412,7 +407,7 @@
 }
 
 bool Program::IsParameterBlockSetIndependent(
-    const set<double*>& independent_set) const {
+    const std::set<double*>& independent_set) const {
   // Loop over each residual block and ensure that no two parameter
   // blocks in the same residual block are part of
   // parameter_block_ptrs as that would violate the assumption that it
@@ -500,7 +495,7 @@
 int Program::NumEffectiveParameters() const {
   int num_parameters = 0;
   for (int i = 0; i < parameter_blocks_.size(); ++i) {
-    num_parameters += parameter_blocks_[i]->LocalSize();
+    num_parameters += parameter_blocks_[i]->TangentSize();
   }
   return num_parameters;
 }
@@ -513,8 +508,8 @@
   int max_scratch_bytes_for_evaluate = 0;
   for (int i = 0; i < residual_blocks_.size(); ++i) {
     max_scratch_bytes_for_evaluate =
-        max(max_scratch_bytes_for_evaluate,
-            residual_blocks_[i]->NumScratchDoublesForEvaluate());
+        std::max(max_scratch_bytes_for_evaluate,
+                 residual_blocks_[i]->NumScratchDoublesForEvaluate());
   }
   return max_scratch_bytes_for_evaluate;
 }
@@ -527,9 +522,9 @@
     int num_parameters = residual_block->NumParameterBlocks();
     for (int j = 0; j < num_parameters; ++j) {
       derivatives += residual_block->NumResiduals() *
-                     residual_block->parameter_blocks()[j]->LocalSize();
+                     residual_block->parameter_blocks()[j]->TangentSize();
     }
-    max_derivatives = max(max_derivatives, derivatives);
+    max_derivatives = std::max(max_derivatives, derivatives);
   }
   return max_derivatives;
 }
@@ -538,7 +533,7 @@
   int max_parameters = 0;
   for (int i = 0; i < residual_blocks_.size(); ++i) {
     max_parameters =
-        max(max_parameters, residual_blocks_[i]->NumParameterBlocks());
+        std::max(max_parameters, residual_blocks_[i]->NumParameterBlocks());
   }
   return max_parameters;
 }
@@ -546,13 +541,14 @@
 int Program::MaxResidualsPerResidualBlock() const {
   int max_residuals = 0;
   for (int i = 0; i < residual_blocks_.size(); ++i) {
-    max_residuals = max(max_residuals, residual_blocks_[i]->NumResiduals());
+    max_residuals =
+        std::max(max_residuals, residual_blocks_[i]->NumResiduals());
   }
   return max_residuals;
 }
 
-string Program::ToString() const {
-  string ret = "Program dump\n";
+std::string Program::ToString() const {
+  std::string ret = "Program dump\n";
   ret += StringPrintf("Number of parameter blocks: %d\n", NumParameterBlocks());
   ret += StringPrintf("Number of parameters: %d\n", NumParameters());
   ret += "Parameters:\n";
diff --git a/internal/ceres/program.h b/internal/ceres/program.h
index ca29d31..343cf8e 100644
--- a/internal/ceres/program.h
+++ b/internal/ceres/program.h
@@ -72,9 +72,9 @@
   // Serialize to/from the program and update states.
   //
   // NOTE: Setting the state of a parameter block can trigger the
-  // computation of the Jacobian of its local parameterization. If
-  // this computation fails for some reason, then this method returns
-  // false and the state of the parameter blocks cannot be trusted.
+  // computation of the Jacobian of its manifold. If this computation fails for
+  // some reason, then this method returns false and the state of the parameter
+  // blocks cannot be trusted.
   bool StateVectorToParameterBlocks(const double* state);
   void ParameterBlocksToStateVector(double* state) const;
 
@@ -82,8 +82,8 @@
   void CopyParameterBlockStateToUserState();
 
   // Set the parameter block pointers to the user pointers. Since this
-  // runs parameter block set state internally, which may call local
-  // parameterizations, this can fail. False is returned on failure.
+  // runs parameter block set state internally, which may call the manifold,
+  // this can fail. False is returned on failure.
   bool SetParameterBlockStatePtrsToUserStatePtrs();
 
   // Update a state vector for the program given a delta.
diff --git a/internal/ceres/program_evaluator.h b/internal/ceres/program_evaluator.h
index 36c9c64..4ab557f 100644
--- a/internal/ceres/program_evaluator.h
+++ b/internal/ceres/program_evaluator.h
@@ -250,7 +250,7 @@
               MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
                   block_jacobians[j],
                   num_residuals,
-                  parameter_block->LocalSize(),
+                  parameter_block->TangentSize(),
                   block_residuals,
                   scratch->gradient.get() + parameter_block->delta_offset());
             }
@@ -320,7 +320,7 @@
 
     double cost;
     std::unique_ptr<double[]> residual_block_evaluate_scratch;
-    // The gradient in the local parameterization.
+    // The gradient in the tangent space of the manifold.
     std::unique_ptr<double[]> gradient;
     // Enough space to store the residual for the largest residual block.
     std::unique_ptr<double[]> residual_block_residuals;
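
For reference, the MatrixTransposeVectorMultiply call above accumulates, for
each parameter block, the tangent space gradient contribution J^T r into the
scratch gradient at that block's delta offset. An equivalent dense Eigen form
(illustrative only):

  #include <Eigen/Core>

  // block_jacobian: num_residuals x TangentSize() for one parameter block.
  // residuals:      the num_residuals sized residual vector of the block.
  // The result is the TangentSize() sized segment that gets added into
  // scratch->gradient at the block's delta_offset().
  Eigen::VectorXd BlockGradientContribution(
      const Eigen::MatrixXd& block_jacobian, const Eigen::VectorXd& residuals) {
    return block_jacobian.transpose() * residuals;
  }
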
diff --git a/internal/ceres/residual_block.cc b/internal/ceres/residual_block.cc
index 067c9ef..cd408f2 100644
--- a/internal/ceres/residual_block.cc
+++ b/internal/ceres/residual_block.cc
@@ -39,8 +39,8 @@
 #include "ceres/cost_function.h"
 #include "ceres/internal/eigen.h"
 #include "ceres/internal/fixed_array.h"
-#include "ceres/local_parameterization.h"
 #include "ceres/loss_function.h"
+#include "ceres/manifold.h"
 #include "ceres/parameter_block.h"
 #include "ceres/residual_block_utils.h"
 #include "ceres/small_blas.h"
@@ -87,7 +87,7 @@
     for (int i = 0; i < num_parameter_blocks; ++i) {
       const ParameterBlock* parameter_block = parameter_blocks_[i];
       if (jacobians[i] != nullptr &&
-          parameter_block->LocalParameterizationJacobian() != nullptr) {
+          parameter_block->PlusJacobian() != nullptr) {
         global_jacobians[i] = scratch;
         scratch += num_residuals * parameter_block->Size();
       } else {
@@ -132,27 +132,27 @@
 
   double squared_norm = VectorRef(residuals, num_residuals).squaredNorm();
 
-  // Update the jacobians with the local parameterizations.
+  // Update the jacobians with the manifolds.
   if (jacobians != nullptr) {
     for (int i = 0; i < num_parameter_blocks; ++i) {
       if (jacobians[i] != nullptr) {
         const ParameterBlock* parameter_block = parameter_blocks_[i];
 
-        // Apply local reparameterization to the jacobians.
-        if (parameter_block->LocalParameterizationJacobian() != nullptr) {
+        // Apply the Manifold::PlusJacobian to the ambient jacobians.
+        if (parameter_block->PlusJacobian() != nullptr) {
           // jacobians[i] = global_jacobians[i] * global_to_local_jacobian.
           MatrixMatrixMultiply<Dynamic, Dynamic, Dynamic, Dynamic, 0>(
               global_jacobians[i],
               num_residuals,
               parameter_block->Size(),
-              parameter_block->LocalParameterizationJacobian(),
+              parameter_block->PlusJacobian(),
               parameter_block->Size(),
-              parameter_block->LocalSize(),
+              parameter_block->TangentSize(),
               jacobians[i],
               0,
               0,
               num_residuals,
-              parameter_block->LocalSize());
+              parameter_block->TangentSize());
         }
       }
     }
@@ -183,7 +183,7 @@
 
         // Correct the jacobians for the loss function.
         correct.CorrectJacobian(num_residuals,
-                                parameter_block->LocalSize(),
+                                parameter_block->TangentSize(),
                                 residuals,
                                 jacobians[i]);
       }
@@ -199,16 +199,16 @@
 
 int ResidualBlock::NumScratchDoublesForEvaluate() const {
   // Compute the amount of scratch space needed to store the full-sized
-  // jacobians. For parameters that have no local parameterization  no storage
-  // is needed and the passed-in jacobian array is used directly. Also include
-  // space to store the residuals, which is needed for cost-only evaluations.
-  // This is slightly pessimistic, since both won't be needed all the time, but
-  // the amount of excess should not cause problems for the caller.
+  // jacobians. For parameters that have no manifold no storage is needed and
+  // the passed-in jacobian array is used directly. Also include space to store
+  // the residuals, which is needed for cost-only evaluations.  This is slightly
+  // pessimistic, since both won't be needed all the time, but the amount of
+  // excess should not cause problems for the caller.
   int num_parameters = NumParameterBlocks();
   int scratch_doubles = 1;
   for (int i = 0; i < num_parameters; ++i) {
     const ParameterBlock* parameter_block = parameter_blocks_[i];
-    if (parameter_block->LocalParameterizationJacobian() != nullptr) {
+    if (parameter_block->PlusJacobian() != nullptr) {
       scratch_doubles += parameter_block->Size();
     }
   }
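
The renames above leave the projection itself unchanged: the CostFunction
still produces an ambient sized jacobian, which is then multiplied on the
right by the manifold's plus jacobian to obtain the tangent sized jacobian the
rest of the solver consumes. A dense Eigen sketch of that product (the
function name is illustrative; the real code uses the small_blas routine shown
above):

  #include <Eigen/Core>

  // J_ambient:     num_residuals x Size(), as produced by the CostFunction.
  // plus_jacobian: Size() x TangentSize(), as produced by the Manifold.
  // Returns:       num_residuals x TangentSize(), what ends up in jacobians[i].
  Eigen::MatrixXd ProjectToTangent(const Eigen::MatrixXd& J_ambient,
                                   const Eigen::MatrixXd& plus_jacobian) {
    return J_ambient * plus_jacobian;
  }

For a SubsetManifold the plus jacobian is the identity with the columns of the
constant coordinates removed, so the product simply picks out the columns of
the ambient jacobian corresponding to the free coordinates, which is exactly
what the residual_block_test expectations below rely on.
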
diff --git a/internal/ceres/residual_block.h b/internal/ceres/residual_block.h
index f28fd42..888a44a 100644
--- a/internal/ceres/residual_block.h
+++ b/internal/ceres/residual_block.h
@@ -92,10 +92,10 @@
   // false, the caller should expect the output memory locations to have
   // been modified.
   //
-  // The returned cost and jacobians have had robustification and local
-  // parameterizations applied already; for example, the jacobian for a
-  // 4-dimensional quaternion parameter using the "QuaternionParameterization"
-  // is num_residuals by 3 instead of num_residuals by 4.
+  // The returned cost and jacobians have had robustification and manifold
+  // projection applied already; for example, the jacobian for a 4-dimensional
+  // quaternion parameter using the "Quaternion" manifold is num_residuals by 3
+  // instead of num_residuals by 4.
   //
   // apply_loss_function as the name implies allows the user to switch
   // the application of the loss function on and off.
diff --git a/internal/ceres/residual_block_test.cc b/internal/ceres/residual_block_test.cc
index 3c05f48..d587058 100644
--- a/internal/ceres/residual_block_test.cc
+++ b/internal/ceres/residual_block_test.cc
@@ -33,7 +33,7 @@
 #include <cstdint>
 
 #include "ceres/internal/eigen.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "ceres/parameter_block.h"
 #include "ceres/sized_cost_function.h"
 #include "gtest/gtest.h"
@@ -75,7 +75,7 @@
   }
 };
 
-TEST(ResidualBlock, EvaluteWithNoLossFunctionOrLocalParameterizations) {
+TEST(ResidualBlock, EvaluteWithNoLossFunctionOrManifolds) {
   double scratch[64];
 
   // Prepare the parameter blocks.
@@ -178,10 +178,10 @@
     if (jacobians) {
       for (int k = 0; k < 3; ++k) {
         // The jacobians here are full sized, but they are transformed in the
-        // evaluator into the "local" jacobian. In the tests, the "subset
-        // constant" parameterization is used, which should pick out columns
-        // from these jacobians. Put values in the jacobian that make this
-        // obvious; in particular, make the jacobians like this:
+        // evaluator into the "local" jacobian. In the tests, the
+        // "SubsetManifold" is used, which should pick out columns from these
+        // jacobians. Put values in the jacobian that make this obvious; in
+        // particular, make the jacobians like this:
         //
         //   0 1 2 3 4 ...
         //   0 1 2 3 4 ...
@@ -200,7 +200,7 @@
   }
 };
 
-TEST(ResidualBlock, EvaluteWithLocalParameterizations) {
+TEST(ResidualBlock, EvaluteWithManifolds) {
   double scratch[64];
 
   // Prepare the parameter blocks.
@@ -221,14 +221,14 @@
   // Make x have the first component fixed.
   vector<int> x_fixed;
   x_fixed.push_back(0);
-  SubsetParameterization x_parameterization(2, x_fixed);
-  x.SetParameterization(&x_parameterization);
+  SubsetManifold x_manifold(2, x_fixed);
+  x.SetManifold(&x_manifold);
 
   // Make z have the third component fixed.
   vector<int> z_fixed;
   z_fixed.push_back(2);
-  SubsetParameterization z_parameterization(4, z_fixed);
-  z.SetParameterization(&z_parameterization);
+  SubsetManifold z_manifold(4, z_fixed);
+  z.SetManifold(&z_manifold);
 
   LocallyParameterizedCostFunction cost_function;
 
diff --git a/internal/ceres/scratch_evaluate_preparer.cc b/internal/ceres/scratch_evaluate_preparer.cc
index 9905b22..d8fee30 100644
--- a/internal/ceres/scratch_evaluate_preparer.cc
+++ b/internal/ceres/scratch_evaluate_preparer.cc
@@ -67,7 +67,7 @@
       jacobians[j] = NULL;
     } else {
       jacobians[j] = jacobian_block_cursor;
-      jacobian_block_cursor += num_residuals * parameter_block->LocalSize();
+      jacobian_block_cursor += num_residuals * parameter_block->TangentSize();
     }
   }
 }
diff --git a/internal/ceres/solver_test.cc b/internal/ceres/solver_test.cc
index c4823be..98c38fd 100644
--- a/internal/ceres/solver_test.cc
+++ b/internal/ceres/solver_test.cc
@@ -38,6 +38,7 @@
 #include "ceres/autodiff_cost_function.h"
 #include "ceres/evaluation_callback.h"
 #include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
 #include "ceres/problem.h"
 #include "ceres/problem_impl.h"
 #include "ceres/sized_cost_function.h"
@@ -532,5 +533,25 @@
   EXPECT_EQ(y, 1.0);
 }
 
+TEST(Solver, ZeroSizedManifoldHoldsParameterBlockConstant) {
+  double x = 0.0;
+  double y = 1.0;
+  Problem problem;
+  problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
+  problem.SetManifold(&y, new SubsetManifold(1, {0}));
+  EXPECT_TRUE(problem.IsParameterBlockConstant(&y));
+
+  Solver::Options options;
+  options.function_tolerance = 0.0;
+  options.gradient_tolerance = 0.0;
+  options.parameter_tolerance = 0.0;
+  Solver::Summary summary;
+  Solve(options, &problem, &summary);
+
+  EXPECT_EQ(summary.termination_type, CONVERGENCE);
+  EXPECT_NEAR(x, 10.0, 1e-7);
+  EXPECT_EQ(y, 1.0);
+}
+
 }  // namespace internal
 }  // namespace ceres
diff --git a/internal/ceres/trust_region_minimizer.cc b/internal/ceres/trust_region_minimizer.cc
index bcf05b3..d848795 100644
--- a/internal/ceres/trust_region_minimizer.cc
+++ b/internal/ceres/trust_region_minimizer.cc
@@ -750,14 +750,12 @@
 // Compute candidate_x_ = Plus(x_, delta_)
 // Evaluate the cost of candidate_x_ as candidate_cost_.
 //
-// Failure to compute the step or the cost mean that candidate_cost_
-// is set to std::numeric_limits<double>::max(). Unlike
-// EvaluateGradientAndJacobian, failure in this function is not fatal
-// as we are only computing and evaluating a candidate point, and if
-// for some reason we are unable to evaluate it, we consider it to be
-// a point with very high cost. This allows the user to deal with edge
-// cases/constraints as part of the LocalParameterization and
-// CostFunction objects.
+// Failure to compute the step or the cost mean that candidate_cost_ is set to
+// std::numeric_limits<double>::max(). Unlike EvaluateGradientAndJacobian,
+// failure in this function is not fatal as we are only computing and evaluating
+// a candidate point, and if for some reason we are unable to evaluate it, we
+// consider it to be a point with very high cost. This allows the user to deal
+// with edge cases/constraints as part of the Manifold and CostFunction objects.
 void TrustRegionMinimizer::ComputeCandidatePointAndEvaluateCost() {
   if (!evaluator_->Plus(x_.data(), delta_.data(), candidate_x_.data())) {
     if (is_not_silent_) {
diff --git a/internal/ceres/trust_region_minimizer_test.cc b/internal/ceres/trust_region_minimizer_test.cc
index a1c48b6..cad30eb 100644
--- a/internal/ceres/trust_region_minimizer_test.cc
+++ b/internal/ceres/trust_region_minimizer_test.cc
@@ -55,7 +55,7 @@
 // Templated Evaluator for Powell's function. The template parameters
 // indicate which of the four variables/columns of the jacobian are
 // active. This is equivalent to constructing a problem and using the
-// SubsetLocalParameterization. This allows us to test the support for
+// SubsetManifold. This allows us to test the support for
 // the Evaluator::Plus operation besides checking for the basic
 // performance of the trust region algorithm.
 template <bool col1, bool col2, bool col3, bool col4>