Kashif's corrections to the docs
diff --git a/docs/api.tex b/docs/api.tex
index 2c36377..a9f4aef 100644
--- a/docs/api.tex
+++ b/docs/api.tex
@@ -126,7 +126,7 @@
Dimension of y -------------------+
\end{minted}
-In this example, there is usually an instance for each measumerent of k.
+In this example, there is usually an instance for each measurement of k.
In the instantiation above, the template parameters following
\texttt{MyScalarCostFunction}, \texttt{<1, 2, 2>} describe the functor as computing a
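Aside, not part of the patch: a minimal sketch of the instantiation this hunk
annotates, assuming the standard Ceres AutoDiffCostFunction interface. The
residual e = k - x'y and the constant 1.0 are illustrative stand-ins, not text
from the patched docs.

    struct MyScalarCostFunction {
      explicit MyScalarCostFunction(double k) : k_(k) {}

      // One residual e = k - x'y, with x and y each 2-dimensional.
      template <typename T>
      bool operator()(const T* const x, const T* const y, T* e) const {
        e[0] = T(k_) - x[0] * y[0] - x[1] * y[1];
        return true;
      }

     private:
      double k_;
    };

    // <1, 2, 2>: a 1-dimensional residual from two 2-dimensional blocks.
    ceres::CostFunction* cost_function =
        new ceres::AutoDiffCostFunction<MyScalarCostFunction, 1, 2, 2>(
            new MyScalarCostFunction(1.0));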
@@ -150,7 +150,7 @@
To get a numerically differentiated cost function, define a subclass of
\texttt{CostFunction} such that the \texttt{Evaluate} function ignores the jacobian
parameter. The numeric differentiation wrapper will fill in the jacobians array
- if nececssary by repeatedly calling the \texttt{Evaluate} method with
+ if necessary by repeatedly calling the \texttt{Evaluate} method with
small changes to the appropriate parameters, and computing the slope. For
performance, the numeric differentiation wrapper class is templated on the
concrete cost function, even though it could be implemented only in terms of
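Aside, not part of the patch: a sketch of how such a numerically
differentiated cost function is set up. The hunk's text describes subclassing
CostFunction; the form below instead assumes the functor-style
NumericDiffCostFunction interface from recent Ceres releases, so take it as an
illustration of the idea rather than the exact API documented here.

    struct MyScalarCostFunctor {
      explicit MyScalarCostFunctor(double k) : k_(k) {}

      // Plain double version with no jacobian code; the wrapper fills in
      // the jacobians array by finite differencing this function.
      bool operator()(const double* const x, const double* const y,
                      double* e) const {
        e[0] = k_ - x[0] * y[0] - x[1] * y[1];
        return true;
      }

     private:
      double k_;
    };

    ceres::CostFunction* cost_function =
        new ceres::NumericDiffCostFunction<
            MyScalarCostFunctor, ceres::CENTRAL, 1, 2, 2>(
            new MyScalarCostFunctor(1.0));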
@@ -582,7 +582,7 @@
The finite differencing is done along each dimension. The
reason to use a relative (rather than absolute) step size is
- that this way, numeric differentation works for functions where
+ that this way, numeric differentiation works for functions where
the arguments are typically large (e.g. 1e9) and when the
values are small (e.g. 1e-5). It is possible to construct
"torture cases" which break this finite difference heuristic,
diff --git a/docs/theory.tex b/docs/theory.tex
index 4b47479..92a7aa6 100644
--- a/docs/theory.tex
+++ b/docs/theory.tex
@@ -179,7 +179,7 @@
block means that if we were to treat the sparsity structure of the
block matrix $H$ as a graph, then the set of \texttt{e\_block}s is an
independent set in this graph. The larger the number of
-\texttt{e\_block}, the smaller is the size of the Schur complement $S$. Indeed the reason Schur based solvers are so efficient at solving bundle adjustment problems is because the numner of points in a bundle adjustment problem is usually an order of magnitude or two larger than the number of cameras.
+\texttt{e\_block}s, the smaller the size of the Schur complement $S$. Indeed, the reason Schur-based solvers are so efficient at solving bundle adjustment problems is that the number of points in a bundle adjustment problem is usually an order of magnitude or two larger than the number of cameras.
Thus, the aim of the \texttt{SCHUR} ordering algorithm is to identify the largest independent set in the graph of $H$. Unfortunately this is an NP-Hard problem. But there is a greedy approximation algorithm that performs well~\cite{li2007miqr} and we use it to identify \texttt{e\_block}s in Ceres.
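Aside, not part of the patch: the Schur complement the theory.tex hunk refers
to, written in the standard bundle adjustment block partition. The symbols B,
E, and C follow common convention and are assumed here, not lifted from
theory.tex.

    \[
    H = \begin{bmatrix} B & E \\ E^\top & C \end{bmatrix},
    \qquad
    S = B - E C^{-1} E^\top .
    \]

Here C gathers the e_blocks (the points). Because the e_blocks form an
independent set, C is block diagonal and cheap to invert, so eliminating the
points leaves a system S whose size is set by the much smaller number of
cameras; the more e_blocks, the smaller S.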