%!TEX root = ceres-solver.tex
\chapter{Hello World!}
\label{chapter:tutorial:helloworld}
To get started, let us consider the problem of finding the minimum of the function
\begin{equation}
\frac{1}{2}(10 - x)^2.
\end{equation}
This is a trivial problem, whose minimum is easily seen to lie at $x = 10$, but it is a good place to start illustrating the basics of solving a problem with Ceres\footnote{Full working code for this and other examples in this manual can be found in the \texttt{examples} directory. Code for this example can be found in \texttt{examples/quadratic.cc}.}.

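As a quick check, setting the derivative of the objective to zero recovers this minimum:
\begin{equation*}
\frac{d}{dx}\,\frac{1}{2}(10 - x)^2 = -(10 - x) = 0 \quad \Longrightarrow \quad x^* = 10.
\end{equation*}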
Let us write this problem as a non-linear least squares problem by defining the scalar residual function $f_1(x) = 10 - x$. Then $F(x) = [f_1(x)]$ is a residual vector with exactly one component.

When solving a problem with Ceres, the first thing to do is to define a subclass of \texttt{CostFunction}. It is responsible for computing the value of the residual function and its derivative (also known as the Jacobian) with respect to $x$.
\begin{minted}[mathescape]{c++}
class SimpleCostFunction
  : public ceres::SizedCostFunction<1 /* number of residuals */,
                                    1 /* size of first parameter */> {
 public:
  virtual ~SimpleCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;  // $f(x) = 10 - x$
    // Compute the Jacobian if asked for.
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};
\end{minted}
\texttt{SimpleCostFunction} is provided with an input array of parameters, an output array for the residuals, and an optional output array for the Jacobians. In our example there is just one parameter and one residual, and this is known at compile time; therefore, instead of inheriting from \texttt{CostFunction}, we can save some code by inheriting from the templated \texttt{SizedCostFunction} class.

The \texttt{jacobians} array is optional; \texttt{Evaluate} is expected to check whether it is non-null and, if it is, fill it with the values of the derivative of the residual function. In this case, since the residual function is linear, the Jacobian is constant.

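To make the calling convention concrete, here is a small standalone sketch (plain C++ with no Ceres dependency; the function name \texttt{EvaluateResidual} is illustrative, not part of Ceres) that evaluates the residual the same way the solver would, once without and once with a Jacobian request:
\begin{minted}{c++}
#include <cassert>
#include <cstdio>

// Illustrative stand-in for SimpleCostFunction::Evaluate: one
// parameter block of size one, one residual.
bool EvaluateResidual(double const* const* parameters,
                      double* residuals,
                      double** jacobians) {
  const double x = parameters[0][0];
  residuals[0] = 10 - x;               // f(x) = 10 - x
  if (jacobians != NULL && jacobians[0] != NULL) {
    jacobians[0][0] = -1;              // df/dx is constant
  }
  return true;
}

int main() {
  double x = 5.0;
  double* parameter_blocks[] = {&x};   // one block holding one parameter
  double residual = 0.0;

  // The solver may ask for residuals only...
  EvaluateResidual(parameter_blocks, &residual, NULL);
  assert(residual == 5.0);

  // ...or for residuals and Jacobians together.
  double jacobian = 0.0;
  double* jacobian_blocks[] = {&jacobian};
  EvaluateResidual(parameter_blocks, &residual, jacobian_blocks);
  assert(jacobian == -1.0);

  std::printf("residual = %g, jacobian = %g\n", residual, jacobian);
  return 0;
}
\end{minted}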
Once we have a way of computing the residual vector, it is now time to construct a non-linear least squares problem using it and have Ceres solve it.
\begin{minted}{c++}
int main(int argc, char** argv) {
  double x = 5.0;
  ceres::Problem problem;

  // The problem object takes ownership of the newly allocated
  // SimpleCostFunction and uses it to optimize the value of x.
  problem.AddResidualBlock(new SimpleCostFunction, NULL, &x);

  // Run the solver!
  Solver::Options options;
  options.max_num_iterations = 10;
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;
  Solver::Summary summary;
  Solve(options, &problem, &summary);
  std::cout << summary.BriefReport() << "\n";
  std::cout << "x : 5.0 -> " << x << "\n";
  return 0;
}
\end{minted}

Compiling and running this program gives us
\begin{minted}{bash}
0: f: 1.250000e+01 d: 0.00e+00 g: 5.00e+00 h: 0.00e+00 rho: 0.00e+00 mu: 1.00e-04 li: 0
1: f: 1.249750e-07 d: 1.25e+01 g: 5.00e-04 h: 5.00e+00 rho: 1.00e+00 mu: 3.33e-05 li: 1
2: f: 1.388518e-16 d: 1.25e-07 g: 1.67e-08 h: 5.00e-04 rho: 1.00e+00 mu: 1.11e-05 li: 1
Ceres Solver Report: Iterations: 2, Initial cost: 1.250000e+01, \
Final cost: 1.388518e-16, Termination: PARAMETER_TOLERANCE.
x : 5 -> 10
\end{minted}

Starting from $x = 5$, the solver reaches $x = 10$ in two iterations\footnote{Actually the solver ran for three iterations; it was by looking at the value returned by the linear solver in the third iteration that it observed that the update to the parameter block was too small and declared convergence. Ceres only prints out the display at the end of an iteration, and terminates as soon as it detects convergence, which is why you only see two iterations here and not three.}. The careful reader will note that this is a linear problem, and one linear solve should be enough to get the optimal value. The default configuration of the solver is aimed at non-linear problems, and for simplicity we did not change it in this example. It is indeed possible to obtain the solution to this problem using Ceres in one iteration. Also note that the solver did get very close to the optimal function value of 0 in the very first iteration. We will discuss these issues in greater detail when we talk about convergence and parameter settings for Ceres.
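To see why a single linear solve suffices here, consider one Gauss-Newton step done by hand. With residual $f(x) = 10 - x$ and Jacobian $J = -1$, the step is $\Delta x = -(J^\top J)^{-1} J^\top f$, which for $x = 5$ gives $\Delta x = 5$ and lands exactly on the optimum. The following standalone sketch (plain C++, no Ceres; purely illustrative of the arithmetic) carries this out:
\begin{minted}{c++}
#include <cassert>
#include <cstdio>

int main() {
  double x = 5.0;               // starting point, as in the example
  const double f = 10 - x;      // residual f(x) = 10 - x
  const double J = -1.0;        // Jacobian df/dx

  // Gauss-Newton step: dx = -(J^T J)^{-1} J^T f.
  // Everything is scalar for this one-dimensional problem.
  const double dx = -(1.0 / (J * J)) * J * f;
  x += dx;

  std::printf("x : 5.0 -> %g\n", x);  // one linear solve reaches the optimum
  assert(x == 10.0);
  return 0;
}
\end{minted}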