%!TEX root = ceres-solver.tex
\chapter{Hello World!}
\label{chapter:tutorial:helloworld}
To get started, let us consider the problem of finding the minimum of the function
\begin{equation}
\frac{1}{2}(10 - x)^2.
\end{equation}
This is a trivial problem whose minimum, $x = 10$, is easy to see, but it is a good place to start illustrating the basics of solving a problem with Ceres.\footnote{Full working code for this and other examples in this manual can be found in the \texttt{examples} directory. Code for this example can be found in \texttt{examples/quadratic.cc}.}
Let us write this problem as a non-linear least squares problem by defining the scalar residual function $f_1(x) = 10 - x$. Then $F(x) = [f_1(x)]$ is a residual vector with exactly one component.
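With this definition, the objective that Ceres minimizes, $\frac{1}{2}\|F(x)\|^2$, is exactly the function above, and the derivative of the single residual with respect to $x$ is constant:
\begin{equation}
\frac{1}{2}\|F(x)\|^2 = \frac{1}{2}(10 - x)^2, \qquad \frac{\partial f_1(x)}{\partial x} = -1.
\end{equation}
These two quantities, the residual and its derivative, are all that Ceres needs from us.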
When solving a problem with Ceres, the first thing to do is to define a subclass of \texttt{CostFunction}. It is responsible for computing the value of the residual function and its derivative (also known as the Jacobian) with respect to $x$.
\begin{minted}[mathescape]{c++}
class SimpleCostFunction
    : public ceres::SizedCostFunction<1 /* number of residuals */,
                                      1 /* size of first parameter */> {
 public:
  virtual ~SimpleCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;  // $f(x) = 10 - x$
    // Compute the Jacobian if asked for.
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};
\end{minted}
\texttt{SimpleCostFunction} is provided with an input array of parameters, an output array for the residuals, and an optional output array for the Jacobians. In our example there is just one parameter and one residual, and this is known at compile time; therefore, instead of inheriting from \texttt{CostFunction}, we can save some code by inheriting from the templated \texttt{SizedCostFunction} class.
The \texttt{jacobians} array is optional: \texttt{Evaluate} is expected to check whether it is non-null and, if so, fill it with the values of the derivative of the residual function. In this case, since the residual function is linear, the Jacobian is constant.
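For comparison, the same cost function could be written against the more general \texttt{CostFunction} base class, declaring the number of residuals and the parameter block sizes at run time instead of through template arguments. The following is only an illustrative sketch, not part of \texttt{examples/quadratic.cc}; it relies on the base class's \texttt{set\_num\_residuals} and \texttt{mutable\_parameter\_block\_sizes} members.
\begin{minted}{c++}
class QuadraticCostFunction : public ceres::CostFunction {
 public:
  QuadraticCostFunction() {
    // The sizes that SizedCostFunction fixes at compile time
    // are declared at run time here.
    set_num_residuals(1);
    mutable_parameter_block_sizes()->push_back(1);
  }
  virtual ~QuadraticCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};
\end{minted}
Fixing the sizes at compile time, as \texttt{SimpleCostFunction} does, is the natural choice here; the run-time variant is mainly useful when the number of residuals or parameters is not known until the problem is being built.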
Once we have a way of computing the residual vector, it is time to construct a non-linear least squares problem using it and have Ceres solve it.
\begin{minted}{c++}
int main(int argc, char** argv) {
  double x = 5.0;
  ceres::Problem problem;
  // The problem object takes ownership of the newly allocated
  // SimpleCostFunction and uses it to optimize the value of x.
  problem.AddResidualBlock(new SimpleCostFunction, NULL, &x);
  // Configure the solver.
  ceres::Solver::Options options;
  options.max_num_iterations = 2;
  // Small, simple problem so we will use the Dense QR
  // factorization based solver.
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.BriefReport() << "\n";
  std::cout << "x : 5.0 -> " << x << "\n";
  return 0;
}
\end{minted}
Compiling and running this program gives us
\begin{minted}{bash}
0: f: 1.250000e+01 d: 0.00e+00 g: 5.00e+00 h: 0.00e+00 rho: 0.00e+00 mu: 1.00e-04 li: 0
1: f: 1.249750e-07 d: 1.25e+01 g: 5.00e-04 h: 5.00e+00 rho: 1.00e+00 mu: 3.33e-05 li: 1
2: f: 1.388518e-16 d: 1.25e-07 g: 1.67e-08 h: 5.00e-04 rho: 1.00e+00 mu: 1.11e-05 li: 1
Ceres Solver Report: Iterations: 2, Initial cost: 1.250000e+01, \
Final cost: 1.388518e-16, Termination: PARAMETER_TOLERANCE.
x : 5 -> 10
\end{minted}
Starting from $x = 5$, the solver reaches $x = 10$ in two iterations. The careful reader will note that this is a linear problem, so one linear solve should be enough to obtain the optimal value. The default configuration of the solver is aimed at non-linear problems, and for reasons of simplicity we did not change it in this example. It is indeed possible to obtain the solution to this problem using Ceres in one iteration. Also note that the solver did get very close to the optimal function value of 0 in the very first iteration. We will discuss these issues in greater detail when we talk about convergence and parameter settings for Ceres.
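To see why the first step gets so close without landing exactly on the minimum, consider the classic Levenberg-Marquardt update $(J^\top J + \mu)\,\Delta x = -J^\top f$ (the damping actually used by Ceres may scale $\mu$ slightly differently, so treat this as an approximate sketch). At $x = 5$ we have $f = 5$, $J = -1$, and the log reports $\mu = 10^{-4}$, so
\begin{equation}
\Delta x = \frac{5}{1 + 10^{-4}} \approx 4.9995, \qquad
\frac{1}{2}\bigl(10 - (5 + \Delta x)\bigr)^2 \approx 1.25 \times 10^{-7},
\end{equation}
which matches the cost reported at iteration 1. An undamped Gauss-Newton step ($\mu = 0$) would have reached $x = 10$ exactly in a single iteration.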