Fix typos in the docs and errors in the demo code.

Change-Id: I237402958ed8747ae438643132fcab90113ac27d
diff --git a/cmake/CeresCompileOptionsToComponents.cmake b/cmake/CeresCompileOptionsToComponents.cmake
index e2e8ec8..ccf0fa2 100644
--- a/cmake/CeresCompileOptionsToComponents.cmake
+++ b/cmake/CeresCompileOptionsToComponents.cmake
@@ -63,7 +63,7 @@
 endmacro()
 
 # Convert the Ceres compile options specified by: CURRENT_CERES_COMPILE_OPTIONS
-# into the correponding list of Ceres components (names), which may be used in:
+# into the corresponding list of Ceres components (names), which may be used in:
 # find_package(Ceres COMPONENTS <XXX>).
 function(ceres_compile_options_to_components CURRENT_CERES_COMPILE_OPTIONS CERES_COMPONENTS_VAR)
   # To enable users to specify that they want *a* sparse linear algebra backend
diff --git a/docs/source/automatic_derivatives.rst b/docs/source/automatic_derivatives.rst
index 1251814..0c48c80 100644
--- a/docs/source/automatic_derivatives.rst
+++ b/docs/source/automatic_derivatives.rst
@@ -45,7 +45,7 @@
 when defining the functor for use with automatic differentiation is
 the signature of the ``operator()``.
 
-In the case of numeric differentition it was
+In the case of numeric differentiation it was
 
 .. code-block:: c++
 
@@ -152,7 +152,7 @@
 .. math::
    x = a + \mathbf{v}.
 
-where the :math:`\epsilon_i`'s are implict. Then, using the same
+where the :math:`\epsilon_i`'s are implicit. Then, using the same
 Taylor series expansion used above, we can see that:
 
 .. math::
diff --git a/docs/source/features.rst b/docs/source/features.rst
index 579875e..e71bd39 100644
--- a/docs/source/features.rst
+++ b/docs/source/features.rst
@@ -33,7 +33,7 @@
     space by specifying a :class:`LocalParameterization` object.
 
 * **Solver Choice** Depending on the size, sparsity structure, time &
-  memory budgets, and solution quality requiremnts, different
+  memory budgets, and solution quality requirements, different
   optimization algorithms will suit different needs. To this end,
   Ceres Solver comes with a variety of optimization algorithms:
 
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 781cd09..b3dfb50 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -1118,7 +1118,7 @@
 search paths.  In which case, no CMake errors will occur, but ``Bar``
 will not link properly, as it does not have the required public link
 dependencies of Ceres, which are stored in the imported target
-defintion.
+definition.
 
 The solution to this is for ``Foo`` (i.e., the project that uses
 Ceres) to invoke ``find_package(Ceres)`` in ``FooConfig.cmake``, thus
diff --git a/docs/source/interfacing_with_autodiff.rst b/docs/source/interfacing_with_autodiff.rst
index 707c071..b79ed45 100644
--- a/docs/source/interfacing_with_autodiff.rst
+++ b/docs/source/interfacing_with_autodiff.rst
@@ -97,7 +97,7 @@
    :class:`CostFunctionToFunctor`. The resulting object is a functor
    with a templated :code:`operator()` method, which pipes the
    Jacobian computed by :class:`NumericDiffCostFunction` into the
-   approproate :code:`Jet` objects.
+   appropriate :code:`Jet` objects.
 
 An implementation of the above three steps looks as follows:
 
@@ -168,7 +168,7 @@
    :class:`CostFunctionToFunctor`. The resulting object is a functor
    with a templated :code:`operator()` method, which pipes the
    Jacobian computed by :class:`NumericDiffCostFunction` into the
-   approproate :code:`Jet` objects.
+   appropriate :code:`Jet` objects.
 
 The resulting code will look as follows:
 
diff --git a/docs/source/modeling_faqs.rst b/docs/source/modeling_faqs.rst
index ee4e62c..a0c8f2f 100644
--- a/docs/source/modeling_faqs.rst
+++ b/docs/source/modeling_faqs.rst
@@ -43,7 +43,7 @@
    four dimensional parameterization of the space of three dimensional
    rotations :math:`SO(3)`.  However, the :math:`SO(3)` is a three
    dimensional set, and so is the tangent space of a
-   Quaternion. Therefore, it is sometimes (not always) benefecial to
+   Quaternion. Therefore, it is sometimes (not always) beneficial to
    associate a local parameterization with parameter blocks
    representing a Quaternion. Assuming that the order of entries in
    your parameter block is :math:`w,x,y,z`, you can use
diff --git a/docs/source/nnls_covariance.rst b/docs/source/nnls_covariance.rst
index 60f15a0..9c6cea8 100644
--- a/docs/source/nnls_covariance.rst
+++ b/docs/source/nnls_covariance.rst
@@ -67,7 +67,7 @@
 ================
 
 In structure from motion (3D reconstruction) problems, the
-reconstruction is ambiguous upto a similarity transform. This is
+reconstruction is ambiguous up to a similarity transform. This is
 known as a *Gauge Ambiguity*. Handling Gauges correctly requires the
 use of SVD or custom inversion algorithms. For small problems the
 user can use the dense algorithm. For more details see the work of
diff --git a/docs/source/nnls_tutorial.rst b/docs/source/nnls_tutorial.rst
index 3c3fc1d..6546ee7 100644
--- a/docs/source/nnls_tutorial.rst
+++ b/docs/source/nnls_tutorial.rst
@@ -621,7 +621,7 @@
 
 Each residual in a BAL problem depends on a three dimensional point
 and a nine parameter camera. The nine parameters defining the camera
-are: three for rotation as a Rodriques' axis-angle vector, three
+are: three for rotation as a Rodrigues' axis-angle vector, three
 for translation, one for focal length and two for radial distortion.
 The details of this camera model can be found the `Bundler homepage
 <http://phototour.cs.washington.edu/bundler/>`_ and the `BAL homepage
diff --git a/docs/source/numerical_derivatives.rst b/docs/source/numerical_derivatives.rst
index 9edc008..57b46bf 100644
--- a/docs/source/numerical_derivatives.rst
+++ b/docs/source/numerical_derivatives.rst
@@ -248,7 +248,7 @@
    \end{align}
 
 The key thing to note here is that the terms :math:`K_2, K_4, ...`
-are indepdendent of :math:`h` and only depend on :math:`x`.
+are independent of :math:`h` and only depend on :math:`x`.
 
 Let us now define:
 
@@ -380,7 +380,7 @@
 ===============
 
 Numeric differentiation should be used when you cannot compute the
-derivatives either analytically or using automatic differention. This
+derivatives either analytically or using automatic differentiation. This
 is usually the case when you are calling an external library or
 function whose analytic form you do not know or even if you do, you
 are not in a position to re-write it in a manner required to use
diff --git a/docs/source/users.rst b/docs/source/users.rst
index 42f66b9..b4f90fa 100644
--- a/docs/source/users.rst
+++ b/docs/source/users.rst
@@ -53,7 +53,7 @@
   beamforming engine, called ASASIN , for estimating platform
   kinematics.
 
-* `Colmap <https://github.com/colmap/colmap>`_ is a an open source
+* `Colmap <https://github.com/colmap/colmap>`_ is an open source
   structure from motion library that makes heavy use of Ceres for
   bundle adjustment with support for many camera models and for other
   non-linear least-squares problems (relative, absolute pose
diff --git a/docs/source/version_history.rst b/docs/source/version_history.rst
index fe4339c..0bef4ad 100644
--- a/docs/source/version_history.rst
+++ b/docs/source/version_history.rst
@@ -128,7 +128,7 @@
 #. Use target_compile_features() to specify C++11 requirement if
    available. (Alex Stewart)
 #. Update docs: .netrc --> .gitcookies (Keir Mierle)
-#. Fix implicit precission loss warning on 64-bit archs (Ricardo
+#. Fix implicit precision loss warning on 64-bit archs (Ricardo
    Sanchez-Saez)
 #. Optionally use exported Eigen CMake configuration if
    available. (Alex Stewart)
@@ -659,7 +659,7 @@
    ``GRADIENT_TOLERANCE`` and ``PARAMETER_TOLERANCE`` have all been
    replaced by ``CONVERGENCE``.
 
-   ``NUMERICAL_FAILURE`` has been replaed by ``FAILURE``.
+   ``NUMERICAL_FAILURE`` has been replaced by ``FAILURE``.
 
    ``USER_ABORT`` has been renamed to ``USER_FAILURE``.
 
@@ -724,7 +724,7 @@
    the other #._FOUND definitions. (Andreas Franek)
 #. Variety of bug fixes and cleanups to the ``CMake`` build system
    (Alex Stewart)
-#. Removed fictious shared library target from the NDK build.
+#. Removed fictitious shared library target from the NDK build.
 #. Solver::Options now uses ``shared_ptr`` to handle ownership of
    ``Solver::Options::linear_solver_ordering`` and
    ``Solver::Options::inner_iteration_ordering``. As a consequence the
@@ -747,7 +747,7 @@
    residuals just like ``AutoDiffCostFunction``.
 #. ``Problem`` exposes more of its structure in its API.
 #. Faster automatic differentiation (Tim Langlois)
-#. Added the commonly occuring ``2_d_d`` template specialization for
+#. Added the commonly occurring ``2_d_d`` template specialization for
    the Schur Eliminator.
 #. Faster ``ITERATIVE_SCHUR`` solver using template specializations.
 #. Faster ``SCHUR_JACOBI`` preconditioner construction.
@@ -853,7 +853,7 @@
 #. Minor errors in documentation (Pablo Speciale)
 #. Updated depend.cmake to follow CMake IF convention. (Joydeep
    Biswas)
-#. Stablize the schur ordering algorithm.
+#. Stabilize the schur ordering algorithm.
 #. Update license header in split.h.
 #. Enabling -O4 (link-time optimization) only if compiler/linker
    support it. (Alex Stewart)
@@ -1065,7 +1065,7 @@
 #. Lots of minor code and lint fixes. (William Rucklidge)
 #. Fixed a bug in ``solver_impl.cc`` residual evaluation. (Markus
    Moll)
-#. Fixed varidic evaluation bug in ``AutoDiff``.
+#. Fixed variadic evaluation bug in ``AutoDiff``.
 #. Fixed ``SolverImpl`` tests.
 #. Fixed a bug in ``DenseSparseMatrix::ToDenseMatrix()``.
 #. Fixed an initialization bug in ``ProgramEvaluator``.
@@ -1325,7 +1325,7 @@
 
 #. Fixed integer overflow bug in ``block_random_access_sparse_matrix.cc``.
 #. Renamed some macros to prevent name conflicts.
-#. Fixed incorrent input to ``StateUpdatingCallback``.
+#. Fixed incorrect input to ``StateUpdatingCallback``.
 #. Fixes to AutoDiff tests.
 #. Various internal cleanups.
 
diff --git a/examples/libmv_homography.cc b/examples/libmv_homography.cc
index 6e74fcd..fe647da 100644
--- a/examples/libmv_homography.cc
+++ b/examples/libmv_homography.cc
@@ -51,7 +51,7 @@
 // This file demonstrates solving for a homography between two sets of points.
 // A homography describes a transformation between a sets of points on a plane,
 // perspectively projected into two images. The first step is to solve a
-// homogeneous system of equations via singular value decompposition, giving an
+// homogeneous system of equations via singular value decomposition, giving an
 // algebraic solution for the homography, then solving for a final solution by
 // minimizing the symmetric transfer error in image space with Ceres (called the
 // Gold Standard Solution in "Multiple View Geometry"). The routines are based on
@@ -105,7 +105,7 @@
 // forward_error = D(H * x1, x2)
 // backward_error = D(H^-1 * x2, x1)
 //
-// Templated to be used with autodifferenciation.
+// Templated to be used with autodifferentiation.
 template <typename T>
 void SymmetricGeometricDistanceTerms(const Eigen::Matrix<T, 3, 3> &H,
                                      const Eigen::Matrix<T, 2, 1> &x1,
diff --git a/include/ceres/autodiff_cost_function.h b/include/ceres/autodiff_cost_function.h
index 60946fd..23ed456 100644
--- a/include/ceres/autodiff_cost_function.h
+++ b/include/ceres/autodiff_cost_function.h
@@ -115,7 +115,7 @@
 // of each of them.
 //
 // WARNING #1: Since the functor will get instantiated with different types for
-// T, you must to convert from other numeric types to T before mixing
+// T, you must convert from other numeric types to T before mixing
 // computations with other variables of type T. In the example above, this is
 // seen where instead of using k_ directly, k_ is wrapped with T(k_).
 //
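
For readers of this hunk, a minimal sketch of the T(k_) pattern the warning refers to (the functor name, sizes, and constant are illustrative; the usual ceres headers are assumed):

  struct ScalarResidual {
    explicit ScalarResidual(double k) : k_(k) {}

    template <typename T>
    bool operator()(const T* const x, T* residual) const {
      // k_ is a plain double; wrapping it as T(k_) keeps the expression
      // well-typed when T is a Jet during automatic differentiation.
      residual[0] = T(k_) - x[0] * x[0];
      return true;
    }

   private:
    const double k_;
  };

  CostFunction* cost_function =
      new AutoDiffCostFunction<ScalarResidual, 1, 1>(new ScalarResidual(1.0));
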
diff --git a/include/ceres/cost_function_to_functor.h b/include/ceres/cost_function_to_functor.h
index bba67a4..6ab5bae 100644
--- a/include/ceres/cost_function_to_functor.h
+++ b/include/ceres/cost_function_to_functor.h
@@ -46,8 +46,7 @@
 // is a cost function that implements the projection of a point in its
 // local coordinate system onto its image plane and subtracts it from
 // the observed point projection. It can compute its residual and
-// either via analytic or numerical differentiation can compute its
-// jacobians.
+// jacobians either via analytic or numerical differentiation.
 //
 // Now we would like to compose the action of this CostFunction with
 // the action of camera extrinsics, i.e., rotation and
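
A compressed sketch of the composition this comment is building up to (block sizes and the surrounding functor are illustrative assumptions, with 2 residuals, 5 intrinsics, and a 3-vector point):

  struct FullProjection {
    explicit FullProjection(CostFunction* intrinsic_projection)
        : intrinsic_projection_(intrinsic_projection) {}

    template <typename T>
    bool operator()(const T* rotation, const T* translation,
                    const T* intrinsics, const T* point, T* residuals) const {
      T transformed_point[3];
      AngleAxisRotatePoint(rotation, point, transformed_point);
      for (int i = 0; i < 3; ++i) {
        transformed_point[i] += translation[i];
      }
      // The wrapped CostFunction's residuals (and jacobians, computed either
      // analytically or numerically) are piped through the Jet-typed arguments.
      return intrinsic_projection_(intrinsics, transformed_point, residuals);
    }

   private:
    CostFunctionToFunctor<2, 5, 3> intrinsic_projection_;
  };
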
diff --git a/include/ceres/covariance.h b/include/ceres/covariance.h
index 0b9f096..da9f525 100644
--- a/include/ceres/covariance.h
+++ b/include/ceres/covariance.h
@@ -60,7 +60,7 @@
 // Background
 // ==========
 // One way to assess the quality of the solution returned by a
-// non-linear least squares solve is to analyze the covariance of the
+// non-linear least squares solver is to analyze the covariance of the
 // solution.
 //
 // Let us consider the non-linear regression problem
@@ -158,7 +158,7 @@
 // Gauge Invariance
 // ----------------
 // In structure from motion (3D reconstruction) problems, the
-// reconstruction is ambiguous upto a similarity transform. This is
+// reconstruction is ambiguous up to a similarity transform. This is
 // known as a Gauge Ambiguity. Handling Gauges correctly requires the
 // use of SVD or custom inversion algorithms. For small problems the
 // user can use the dense algorithm. For more details see
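
A usage sketch consistent with the Covariance API this header documents (`problem` is assumed to be an already-solved ceres::Problem; x is taken to be a 3-vector and y a 2-vector):

  Covariance::Options options;
  Covariance covariance(options);

  std::vector<std::pair<const double*, const double*> > covariance_blocks;
  covariance_blocks.push_back(std::make_pair(x, x));
  covariance_blocks.push_back(std::make_pair(y, y));
  covariance_blocks.push_back(std::make_pair(x, y));

  CHECK(covariance.Compute(covariance_blocks, &problem));

  double covariance_xx[3 * 3];
  double covariance_yy[2 * 2];
  double covariance_xy[3 * 2];
  covariance.GetCovarianceBlock(x, x, covariance_xx);
  covariance.GetCovarianceBlock(y, y, covariance_yy);
  covariance.GetCovarianceBlock(x, y, covariance_xy);
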
diff --git a/include/ceres/dynamic_autodiff_cost_function.h b/include/ceres/dynamic_autodiff_cost_function.h
index 2d66f3f..1bfb7a5 100644
--- a/include/ceres/dynamic_autodiff_cost_function.h
+++ b/include/ceres/dynamic_autodiff_cost_function.h
@@ -57,7 +57,7 @@
 //     bool operator()(T const* const* parameters, T* residuals) const {
 //       // Use parameters[i] to access the i'th parameter block.
 //     }
-//   }
+//   };
 //
 // Since the sizing of the parameters is done at runtime, you must
 // also specify the sizes after creating the dynamic autodiff cost
@@ -103,7 +103,7 @@
     // depends on.
     //
     // To work around this issue, the solution here is to evaluate the
-    // jacobians in a series of passes, each one computing Stripe *
+    // jacobians in a series of passes, each one computing Stride *
     // num_residuals() derivatives. This is done with small, fixed-size jets.
     const int num_parameter_blocks =
         static_cast<int>(parameter_block_sizes().size());
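
Since the comment above stresses that sizing happens at runtime, a short construction sketch (the functor name, block sizes, and stride of 4 are illustrative):

  DynamicAutoDiffCostFunction<MyCostFunctor, 4>* cost_function =
      new DynamicAutoDiffCostFunction<MyCostFunctor, 4>(new MyCostFunctor());
  cost_function->AddParameterBlock(5);   // first parameter block has 5 entries
  cost_function->AddParameterBlock(10);  // second has 10
  cost_function->SetNumResiduals(21);    // number of residuals, set at runtime
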
diff --git a/include/ceres/gradient_checker.h b/include/ceres/gradient_checker.h
index fbd018d..e23df76 100644
--- a/include/ceres/gradient_checker.h
+++ b/include/ceres/gradient_checker.h
@@ -69,7 +69,7 @@
   // parameterizations.
   //
   // function: The cost function to probe.
-  // local_parameterization: A vector of local parameterizations for each
+  // local_parameterizations: A vector of local parameterizations for each
   // parameter. May be NULL or contain NULL pointers to indicate that the
   // respective parameter does not have a local parameterization.
   // options: Options to use for numerical differentiation.
@@ -99,10 +99,10 @@
     // Derivatives as computed by the cost function in local space.
     std::vector<Matrix> local_jacobians;
 
-    // Derivatives as computed by nuerical differentiation in local space.
+    // Derivatives as computed by numerical differentiation.
     std::vector<Matrix> numeric_jacobians;
 
-    // Derivatives as computed by nuerical differentiation in local space.
+    // Derivatives as computed by numerical differentiation in local space.
     std::vector<Matrix> local_numeric_jacobians;
 
     // Contains the maximum relative error found in the local Jacobians.
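
For orientation, a hedged sketch of driving the checker and reading ProbeResults (`my_cost_function`, `my_parameterization`, and `parameter_blocks` are assumed to exist; the tolerance is illustrative):

  std::vector<const LocalParameterization*> local_parameterizations;
  local_parameterizations.push_back(&my_parameterization);  // or NULL

  NumericDiffOptions numeric_diff_options;
  GradientChecker gradient_checker(&my_cost_function,
                                   &local_parameterizations,
                                   numeric_diff_options);

  GradientChecker::ProbeResults results;
  if (!gradient_checker.Probe(parameter_blocks.data(), 1e-9, &results)) {
    LOG(ERROR) << "Jacobian check failed:\n" << results.error_log;
  }
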
diff --git a/include/ceres/gradient_problem_solver.h b/include/ceres/gradient_problem_solver.h
index ef3bf42..1831d8d 100644
--- a/include/ceres/gradient_problem_solver.h
+++ b/include/ceres/gradient_problem_solver.h
@@ -300,7 +300,7 @@
     // to compute the next candidate step size as part of a line search.
     double line_search_polynomial_minimization_time_in_seconds = -1.0;
 
-    // Number of parameters in the probem.
+    // Number of parameters in the problem.
     int num_parameters = -1;
 
     // Dimension of the tangent space of the problem.
@@ -329,7 +329,7 @@
   // Once a least squares problem has been built, this function takes
   // the problem and optimizes it based on the values of the options
   // parameters. Upon return, a detailed summary of the work performed
-  // by the preprocessor, the non-linear minmizer and the linear
+  // by the preprocessor, the non-linear minimizer and the linear
   // solver are reported in the summary object.
   virtual void Solve(const GradientProblemSolver::Options& options,
                      const GradientProblem& problem,
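
A minimal end-to-end sketch of this Solve() entry point (Rosenbrock stands in for any FirstOrderFunction implementation and is assumed to be defined elsewhere):

  double parameters[2] = {-1.2, 1.0};

  ceres::GradientProblemSolver::Options options;
  options.minimizer_progress_to_stdout = true;

  ceres::GradientProblemSolver::Summary summary;
  ceres::GradientProblem problem(new Rosenbrock());
  ceres::Solve(options, problem, parameters, &summary);

  std::cout << summary.FullReport() << "\n";
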
diff --git a/include/ceres/loss_function.h b/include/ceres/loss_function.h
index 078771e..97a70b6 100644
--- a/include/ceres/loss_function.h
+++ b/include/ceres/loss_function.h
@@ -57,7 +57,7 @@
 // anything special (i.e. if we used a basic quadratic loss), the
 // residual for the erroneous measurement will result in extreme error
 // due to the quadratic nature of squared loss. This results in the
-// entire solution getting pulled away from the optimimum to reduce
+// entire solution getting pulled away from the optimum to reduce
 // the large error that would otherwise be attributed to the wrong
 // measurement.
 //
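
The usual antidote to the pull described above is a robust loss; a hedged one-liner (the scale 1.0, `problem`, `cost_function`, and `x` are illustrative):

  // Residuals much larger than ~1.0 are down-weighted instead of squared,
  // so a single bad measurement cannot drag the whole solution with it.
  problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), x);
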
diff --git a/include/ceres/numeric_diff_cost_function.h b/include/ceres/numeric_diff_cost_function.h
index 7cab267..6ec86fa 100644
--- a/include/ceres/numeric_diff_cost_function.h
+++ b/include/ceres/numeric_diff_cost_function.h
@@ -56,12 +56,12 @@
 // define the object
 //
 //   class MyScalarCostFunctor {
-//     MyScalarCostFunctor(double k): k_(k) {}
+//     explicit MyScalarCostFunctor(double k): k_(k) {}
 //
 //     bool operator()(const double* const x,
 //                     const double* const y,
 //                     double* residuals) const {
-//       residuals[0] = k_ - x[0] * y[0] + x[1] * y[1];
+//       residuals[0] = k_ - x[0] * y[0] - x[1] * y[1];
 //       return true;
 //     }
 //
@@ -223,7 +223,7 @@
         (N0 > 0) + (N1 > 0) + (N2 > 0) + (N3 > 0) + (N4 > 0) +
         (N5 > 0) + (N6 > 0) + (N7 > 0) + (N8 > 0) + (N9 > 0);
 
-    // Get the function value (residuals) at the the point to evaluate.
+    // Get the function value (residuals) at the point to evaluate.
     if (!internal::EvaluateImpl<CostFunctor,
                                 N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>(
                                     functor_.get(),
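
For completeness, the construction that pairs with the corrected functor above, mirroring its single residual and two 2-vector parameter blocks (`problem`, `x`, and `y` are assumed):

  CostFunction* cost_function =
      new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(
          new MyScalarCostFunctor(1.0));
  problem.AddResidualBlock(cost_function, NULL, x, y);
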
diff --git a/include/ceres/problem.h b/include/ceres/problem.h
index 43b27c8..e220e66 100644
--- a/include/ceres/problem.h
+++ b/include/ceres/problem.h
@@ -113,8 +113,8 @@
 //
 //   Problem problem;
 //
-//   problem.AddResidualBlock(new MyUnaryCostFunction(...), x1);
-//   problem.AddResidualBlock(new MyBinaryCostFunction(...), x2, x3);
+//   problem.AddResidualBlock(new MyUnaryCostFunction(...), NULL, x1);
+//   problem.AddResidualBlock(new MyBinaryCostFunction(...), NULL, x2, x3);
 //
 // Please see cost_function.h for details of the CostFunction object.
 class CERES_EXPORT Problem {
@@ -136,13 +136,13 @@
     //
     // By default, RemoveParameterBlock() and RemoveResidualBlock() take time
     // proportional to the size of the entire problem.  If you only ever remove
-    // parameters or residuals from the problem occassionally, this might be
+    // parameters or residuals from the problem occasionally, this might be
     // acceptable.  However, if you have memory to spare, enable this option to
     // make RemoveParameterBlock() take time proportional to the number of
     // residual blocks that depend on it, and RemoveResidualBlock() take (on
     // average) constant time.
     //
-    // The increase in memory usage is twofold: an additonal hash set per
+    // The increase in memory usage is twofold: an additional hash set per
     // parameter block containing all the residuals that depend on the parameter
     // block; and a hash set in the problem containing all residuals.
     bool enable_fast_removal = false;
@@ -176,7 +176,7 @@
   ~Problem();
 
   // Add a residual block to the overall cost function. The cost
-  // function carries with it information about the sizes of the
+  // function carries information about the sizes of the
   // parameter blocks it expects. The function checks that these match
   // the sizes of the parameter blocks listed in parameter_blocks. The
   // program aborts if a mismatch is detected. loss_function can be
diff --git a/include/ceres/rotation.h b/include/ceres/rotation.h
index d05d190..a0530dd 100644
--- a/include/ceres/rotation.h
+++ b/include/ceres/rotation.h
@@ -89,13 +89,13 @@
 // The value quaternion must be a unit quaternion - it is not normalized first,
 // and angle_axis will be filled with a value whose norm is the angle of
 // rotation in radians, and whose direction is the axis of rotation.
-// The implemention may be used with auto-differentiation up to the first
+// The implementation may be used with auto-differentiation up to the first
 // derivative, higher derivatives may have unexpected results near the origin.
 template<typename T>
 void QuaternionToAngleAxis(const T* quaternion, T* angle_axis);
 
 // Conversions between 3x3 rotation matrix (in column major order) and
-// quaternion rotation representations.  Templated for use with
+// quaternion rotation representations. Templated for use with
 // autodifferentiation.
 template <typename T>
 void RotationMatrixToQuaternion(const T* R, T* quaternion);
@@ -106,7 +106,7 @@
     T* quaternion);
 
 // Conversions between 3x3 rotation matrix (in column major order) and
-// axis-angle rotation representations.  Templated for use with
+// axis-angle rotation representations. Templated for use with
 // autodifferentiation.
 template <typename T>
 void RotationMatrixToAngleAxis(const T* R, T* angle_axis);
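
A tiny usage sketch of the conversions touched in this file (the identity quaternion is just an example input):

  const double quaternion[4] = {1.0, 0.0, 0.0, 0.0};  // unit quaternion, w,x,y,z
  double angle_axis[3];
  ceres::QuaternionToAngleAxis(quaternion, angle_axis);
  // angle_axis is now {0, 0, 0}: the identity rotation.

  double rotation_matrix[9];  // column major
  ceres::AngleAxisToRotationMatrix(angle_axis, rotation_matrix);
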
diff --git a/include/ceres/solver.h b/include/ceres/solver.h
index bd6172f..83077e2 100644
--- a/include/ceres/solver.h
+++ b/include/ceres/solver.h
@@ -77,7 +77,7 @@
     // exactly or inexactly.
     //
     // 2. The trust region approach approximates the objective
-    // function using using a model function (often a quadratic) over
+    // function using a model function (often a quadratic) over
     // a subset of the search space known as the trust region. If the
     // model function succeeds in minimizing the true objective
     // function the trust region is expanded; conversely, otherwise it
@@ -238,7 +238,7 @@
     // in the value of the objective function.
     //
     // This is because allowing for non-decreasing objective function
-    // values in a princpled manner allows the algorithm to "jump over
+    // values in a principled manner allows the algorithm to "jump over
     // boulders" as the method is not restricted to move into narrow
     // valleys while preserving its convergence properties.
     //
@@ -339,7 +339,7 @@
     // available.
     //
     // This setting affects the DENSE_QR, DENSE_NORMAL_CHOLESKY and
-    // DENSE_SCHUR solvers. For small to moderate sized probem EIGEN
+    // DENSE_SCHUR solvers. For small to moderate sized problems EIGEN
     // is a fine choice but for large problems, an optimized LAPACK +
     // BLAS implementation can make a substantial difference in
     // performance.
@@ -388,7 +388,7 @@
     //
     // Given such an ordering, Ceres ensures that the parameter blocks in
     // the lowest numbered group are eliminated first, and then the
-    // parmeter blocks in the next lowest numbered group and so on. Within
+    // parameter blocks in the next lowest numbered group and so on. Within
     // each group, Ceres is free to order the parameter blocks as it
     // chooses.
     //
@@ -434,7 +434,7 @@
     // ITERATIVE_SCHUR.
     //
     // By default this option is disabled and ITERATIVE_SCHUR
-    // evaluates evaluates matrix-vector products between the Schur
+    // evaluates matrix-vector products between the Schur
     // complement and a vector implicitly by exploiting the algebraic
     // expression for the Schur complement.
     //
@@ -492,7 +492,7 @@
     // TODO(sameeragarwal): Further expand the documentation for the
     // following two options.
 
-    // NOTE1: EXPERIMETAL FEATURE, UNDER DEVELOPMENT, USE AT YOUR OWN RISK.
+    // NOTE1: EXPERIMENTAL FEATURE, UNDER DEVELOPMENT, USE AT YOUR OWN RISK.
     //
     // If use_mixed_precision_solves is true, the Gauss-Newton matrix
     // is computed in double precision, but its factorization is
@@ -539,7 +539,7 @@
     // known as Wiberg's algorithm.
     //
     // Ruhe & Wedin (Algorithms for Separable Nonlinear Least Squares
-    // Problems, SIAM Reviews, 22(3), 1980) present an analyis of
+    // Problems, SIAM Reviews, 22(3), 1980) present an analysis of
     // various algorithms for solving separable non-linear least
     // squares problems and refer to "Variable Projection" as
     // Algorithm I in their paper.
@@ -679,7 +679,7 @@
     //
     // The finite differencing is done along each dimension. The
     // reason to use a relative (rather than absolute) step size is
-    // that this way, numeric differentation works for functions where
+    // that this way, numeric differentiation works for functions where
     // the arguments are typically large (e.g. 1e9) and when the
     // values are small (e.g. 1e-5). It is possible to construct
     // "torture cases" which break this finite difference heuristic,
@@ -866,7 +866,7 @@
     // Number of parameter blocks in the problem.
     int num_parameter_blocks = -1;
 
-    // Number of parameters in the probem.
+    // Number of parameters in the problem.
     int num_parameters = -1;
 
     // Dimension of the tangent space of the problem (or the number of
@@ -1035,7 +1035,7 @@
   // Once a least squares problem has been built, this function takes
   // the problem and optimizes it based on the values of the options
   // parameters. Upon return, a detailed summary of the work performed
-  // by the preprocessor, the non-linear minmizer and the linear
+  // by the preprocessor, the non-linear minimizer and the linear
   // solver are reported in the summary object.
   virtual void Solve(const Options& options,
                      Problem* problem,
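
For orientation, a hedged sketch exercising a couple of the options discussed in these hunks (`problem` is assumed to be an already-built ceres::Problem):

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::ITERATIVE_SCHUR;
  options.use_explicit_schur_complement = false;  // implicit Schur product, as above
  options.minimizer_progress_to_stdout = true;

  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.FullReport() << "\n";
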
diff --git a/include/ceres/tiny_solver.h b/include/ceres/tiny_solver.h
index d447665..b6484fe 100644
--- a/include/ceres/tiny_solver.h
+++ b/include/ceres/tiny_solver.h
@@ -38,7 +38,7 @@
 // during solving. This is especially useful when solving many similar problems;
 // for example, inverse pixel distortion for every pixel on a grid.
 //
-// Note: This code has no depedencies beyond Eigen, including on other parts of
+// Note: This code has no dependencies beyond Eigen, including on other parts of
 // Ceres, so it is possible to take this file alone and put it in another
 // project without the rest of Ceres.
 //
@@ -60,7 +60,7 @@
 
 // To use tiny solver, create a class or struct that allows computing the cost
 // function (described below). This is similar to a ceres::CostFunction, but is
-// different to enable statically allocating all memory for the solve
+// different to enable statically allocating all memory for the solver
 // (specifically, enum sizes). Key parts are the Scalar typedef, the enums to
 // describe problem sizes (needed to remove all heap allocations), and the
 // operator() overload to evaluate the cost and (optionally) jacobians.
@@ -75,9 +75,9 @@
 //                     double* residuals,
 //                     double* jacobian) const;
 //
-//     int NumResiduals();  -- Needed if NUM_RESIDUALS == Eigen::Dynamic.
-//     int NumParameters(); -- Needed if NUM_PARAMETERS == Eigen::Dynamic.
-//   }
+//     int NumResiduals() const;  -- Needed if NUM_RESIDUALS == Eigen::Dynamic.
+//     int NumParameters() const; -- Needed if NUM_PARAMETERS == Eigen::Dynamic.
+//   };
 //
 // For operator(), the size of the objects is:
 //
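
To ground the functor requirements listed above, a hedged fixed-size sketch (the residual model is made up; the Jacobian layout assumes Eigen's default column-major storage):

  struct LineFitCost {
    typedef double Scalar;
    enum { NUM_RESIDUALS = 2, NUM_PARAMETERS = 2 };

    bool operator()(const double* parameters,
                    double* residuals,
                    double* jacobian) const {
      const double a = parameters[0];
      const double b = parameters[1];
      residuals[0] = a * 1.0 + b - 2.0;  // fit a*x + b through (1, 2)
      residuals[1] = a * 2.0 + b - 3.0;  // and (2, 3)
      if (jacobian) {
        jacobian[0] = 1.0;  // d r0 / d a
        jacobian[1] = 2.0;  // d r1 / d a
        jacobian[2] = 1.0;  // d r0 / d b
        jacobian[3] = 1.0;  // d r1 / d b
      }
      return true;
    }
  };

  Eigen::Vector2d ab(0.0, 0.0);
  LineFitCost cost;
  ceres::TinySolver<LineFitCost> solver;
  solver.Solve(cost, &ab);  // ab converges to (1, 1)
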
diff --git a/internal/ceres/thread_pool.h b/internal/ceres/thread_pool.h
index 87c58c2..1ebb52e 100644
--- a/internal/ceres/thread_pool.h
+++ b/internal/ceres/thread_pool.h
@@ -42,7 +42,7 @@
 namespace internal {
 
 // A thread-safe thread pool with an unbounded task queue and a resizable number
-// of workers.  The size of the thread pool can be increased by never decreased
+// of workers.  The size of the thread pool can be increased but never decreased
 // in order to support the largest number of threads requested.  The ThreadPool
 // has three states:
 //