Parameter Estimation

The vast majority of the problems treated in this document reduce to parameter estimation problems. In this section, we give the basic tools and concepts related to such problems. Broadly speaking, the goal of parameter estimation is to estimate (or infer) the unknown parameters of a fixed model so that it fits some noisy measurements. It relies on an analysis of the probability density functions of the errors in the measurements.

This section is organized as follows. We first give some general points on parameter estimation. We then specialize these general points to particular noise distributions and estimators.


General Points on Parameter Estimation


Parametric Models of Function

The first thing to do in a parameter estimation problem is to choose a parametric model of function. Note that in this document the expression `parametric model of function' will often be abbreviated to `parametric model' or even `model'. A parametric model is a family of functions that can be described with a finite set of parameters. These parameters are generally grouped in a vector  $ \mathbf{p} \in \mathbb{R}^p$. The parametric model is the following family of functions:

$\displaystyle \{ \mathcal {F}(\bullet; \mathbf{p}) : \mathbb{R}^n \rightarrow \mathbb{R}^m \;\vert\; \mathbf{p} \in \mathbb{R}^p \}.$ (3.1)

We generally designate a parametric model with the notation $ \mathcal {F} : \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}^m$. A standard convention is to denote $ \mathcal {F}_{\mathbf{p}}$ a particular element of the parametric model (for a given set of parameters  $ \mathbf{p}$). We do not follow this convention in this document. This stems from the fact that we sometimes want to consider the function  $ \mathcal {F}$ as a function of its natural parameters for a fixed set of parameters  $ \mathbf{p}$ (i.e. the function $ \mathcal {F}(\bullet; \mathbf{p}) : \mathbb{R}^n \rightarrow \mathbb{R}^m$). At other times we consider  $ \mathcal {F}$ as a function of the parameters for a given set of natural parameters  $ \mathbf{x}$ (i.e. the function $ \mathcal {F}(\mathbf{x} ; \bullet) : \mathbb{R}^p \rightarrow \mathbb{R}^m$).

The choice of the model depends mainly on the hypotheses we are able to make on the data to fit. For instance, if one wanted to study the motion of a ball thrown by a basketball player, a good model may be a polynomial of degree 2. The models most used in this thesis have been detailed in section 2.3.

The goal of parameter estimation is to determine a set of parameters  $ \mathbf{p}$ so that the resulting function  $ \mathcal {F}(\bullet; \mathbf{p})$ is close to the data measurements.
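To fix ideas, here is a minimal sketch (in Python/NumPy, with made-up names) of a parametric model in the sense above: a degree-2 polynomial, as in the basketball example, viewed either as a function of its natural parameters for a fixed  $ \mathbf{p}$, or as a function of  $ \mathbf{p}$ for a fixed input.

```python
import numpy as np

def F(x, p):
    """Degree-2 polynomial model F(x; p) = p[0] + p[1]*x + p[2]*x**2 (here n = 1, p = 3)."""
    return p[0] + p[1] * x + p[2] * x ** 2

p = np.array([0.0, 2.0, -4.9])

# F(. ; p): function of the natural parameter x for a fixed parameter vector p.
trajectory = lambda x: F(x, p)

# F(x ; .): function of the parameters p for a fixed input x.
at_half = lambda q: F(0.5, q)

print(trajectory(0.5), at_half(p))   # same value, two viewpoints
```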

Data Measurements, Errors

From now on and without any loss of generality, we assume that the model is made of scalar-valued functions. The data points, i.e. the measurements, can be seen as noisy instances of a reference function $ \underline{f} : \mathbb{R}^n \rightarrow \mathbb{R}$. Let us denote $ \{ \mathbf{x}_i \leftrightarrow y_i \}_{i=1}^d$ the set of the $ d$ measurements, with $ \mathbf{x}_i \in \mathbb{R}^n$ and $ y_i \in \mathbb{R}$. We have that:

$\displaystyle y_i = \underline{f}(\mathbf{x}_i) + \varepsilon_i, \quad \forall i \in \llbracket 1, d \rrbracket$ (3.2)

where $ \varepsilon_i$ is a random variable. Estimation techniques rely on an analysis of the errors  $ \varepsilon_i$. In particular, an estimation technique depends on the probability distribution assumed for the random variables  $ \varepsilon_i$.

Probability Density Function (PDF)

Let $ X$ be a continuous random variable. The probability density function $ p$ of the random variable $ X$ is a function from  $ \mathbb{R}$ to $ \mathbb{R}_+$ such that:

$\displaystyle P [ a \leq X \leq b ] = \int_a^b p(x) \mathrm d x,$ (3.3)

where $ P [ a \leq X \leq b ]$ denotes the probability that $ X$ lies in the interval $ [a,b]$. In order to be a PDF, a function $ p$ must satisfy several criteria. First, it must be a positive function. Second, it must be integrable. Third, it must satisfy:

$\displaystyle \int_{-\infty}^{+\infty} p(x) \mathrm dx = 1.$ (3.4)

The PDF of a random variable defines the distribution of that variable. Some common distributions (and what they imply in terms of parameter estimation) will be detailed later in this section.

The cumulative distribution function $ F$ associated with the PDF $ p$ is the function from  $ \mathbb{R}$ to $ [0,1]$ defined by:

$\displaystyle F(x) = \int_{-\infty}^x p(t) \mathrm d t.$ (3.5)

Note that since $ p$ is a positive function, $ F$ is a non-decreasing function.
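As a quick numerical illustration of definitions (3.3)-(3.5) (a sketch only, taking the standard normal density as example), one can check that a valid PDF integrates to 1 and that its cumulative distribution function is non-decreasing:

```python
import numpy as np

# Standard normal PDF, used as a concrete example of a valid density.
p = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)

t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
pdf = p(t)

print(pdf.sum() * dt)               # eq. (3.4): integrates to ~1 (up to discretization)

cdf = np.cumsum(pdf) * dt           # eq. (3.5): running integral of the PDF
print(np.all(np.diff(cdf) >= 0))    # non-decreasing, as noted above
```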

Maximum Likelihood Estimation

Maximum Likelihood Estimation (MLE) is a statistical tool used to find the parameters of a model given a set of data. The general idea behind MLE is to find the `most probable' parameters  $ \mathbf{p}$ given the observations. Let $ \{ \mathbf{x}_i \leftrightarrow y_i \}_{i=1}^d$ be the observations. Let $ e_i$ be the residual for the $ i$-th data point for a given set of parameters  $ \mathbf{p}$, i.e. $ e_i = y_i - f(\mathbf{x}_i ; \mathbf{p})$. Note that $ e_i$ depends on the parameters  $ \mathbf{p}$ even though this dependency is not explicitly written. We consider the residuals $ e_i$ as random variables. Let us assume that the variables $ e_i$ are independent and identically distributed (i.i.d.). This allows us to say that the joint probability of the variables  $ \{e_i\}_{i=1}^d$ given the model parameters  $ \mathbf{p}$ is:

$\displaystyle P[e_1, \ldots, e_d \vert \mathbf{p}] = \prod_{i=1}^d P[e_i \vert \mathbf{p}],$ (3.6)

where $ P[A \vert B]$ denotes the probability of the event $ A$ given the event $ B$ and $ P[A_1, \ldots, A_n]$ denotes the joint probability of the events  $ A_1, \ldots, A_n$. In terms of probability density functions, this means that we have:

$\displaystyle p(e_1, \ldots, e_d ; \mathbf{p}) = \prod_{i=1}^d p(e_i ; \mathbf{p}),$ (3.7)

where $ p$ is the probability density function associated to the distribution of the random variables $ e_i$. The likelihood function  $ \mathcal {L}$ is the joint probability density function considered as a function of the parameters  $ \mathbf{p}$:

$\displaystyle \mathcal {L}(\mathbf{p}) = p(e_1, \ldots, e_d ; \mathbf{p}) = \prod_{i=1}^d p(e_i ; \mathbf{p}).$ (3.8)

MLE amounts to maximizing the likelihood of the parameters  $ \mathbf{p}$, i.e.:

$\displaystyle \max_{\mathbf{p}} \mathcal {L}(\mathbf{p}).$ (3.9)

From a practical point of view, it is often more convenient to consider the log-likelihood instead of the likelihood. The log-likelihood  $ \hat{\mathcal {L}}$ is the function defined as:

$\displaystyle \hat{\mathcal {L}} = \ln(\mathcal {L}).$ (3.10)

In this way, equation (3.9), which is a maximization problem with a cost function defined as a product of numerous factors, can easily be turned into a minimization problem with a cost function defined as a simple sum:

$\displaystyle \max_{\mathbf{p}} \hat{\mathcal {L}}(\mathbf{p}) \qquad \Leftrightarrow \qquad \min_{\mathbf{p}} \sum_{i=1}^d - \ln\big(p(e_i ; \mathbf{p})\big).$ (3.11)

MLE will be instantiated to specific distributions later in this section. For now, let us just say that least-squares problems are the natural consequence of MLE with normally-distributed errors.
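As an illustration of (3.11), the following sketch (Python/NumPy with SciPy; the affine model and the noise level are made up for the example) estimates parameters by minimizing the negative log-likelihood under a Gaussian noise assumption, which is exactly the least-squares case mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative affine model f(x; p) = p[0] + p[1] * x.
def f(x, p):
    return p[0] + p[1] * x

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = f(x, [1.0, 3.0]) + 0.1 * rng.standard_normal(x.size)       # noisy measurements

def neg_log_likelihood(p, sigma=0.1):
    e = y - f(x, p)                                             # residuals e_i
    # Sum over i of -ln p(e_i; p) for a zero-mean Gaussian of std sigma (eq. 3.11).
    return np.sum(0.5 * (e / sigma) ** 2 + np.log(sigma * np.sqrt(2.0 * np.pi)))

p_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
print(p_hat)                                                    # close to [1.0, 3.0]
```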


Maximum A Posteriori Estimation

For MLE, we specified a distribution $ p(e_i ; \mathbf{p})$ for the residuals $ e_i$ given the parameters  $ \mathbf{p}$. For the method of Maximum A Posteriori (MAP), we also specify a prior distribution $ p(\mathbf{p})$ on the parameters  $ \mathbf{p}$ that reflects our knowledge about the parameters independently of any observation (96). As an example of a parameter prior, one may assume that the fitted function must be `smooth'. The posterior distribution  $ p(\mathbf{p} ; e_1, \ldots, e_d)$ is then computed using Bayes' rule:

$\displaystyle p(\mathbf{p} ; e_1, \ldots, e_d) = \frac{p(e_1, \ldots, e_d ; \mathbf{p}) \, p(\mathbf{p})}{\int p(e_1, \ldots, e_d ; \mathbf{p}) \, p(\mathbf{p}) \, \mathrm d \mathbf{p}}.$ (3.12)

The denominator in the right hand side of equation (3.12) acts as a normalizing constant chosen such that $ p(\mathbf{p} ; e_1, \ldots, e_d)$ is an actual probability density function. The MAP estimation method amounts to solving the following maximization problem:

$\displaystyle \max_{\mathbf{p}} p(\mathbf{p} ; e_1, \ldots, e_d).$ (3.13)

Note that MAP is equivalent to MLE if the prior distribution of the parameters  $ \mathbf{p}$ is chosen to be a uniform distribution. In this case, we say that we have an uninformative prior.
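In log form, (3.13) becomes the minimization of the negative log-likelihood plus a negative log-prior. The sketch below (same illustrative affine model as in the MLE example, with a zero-mean Gaussian prior on  $ \mathbf{p}$ chosen purely for illustration) makes this explicit; with a very broad, nearly uniform prior, the result approaches the MLE solution.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, p):                                     # illustrative affine model
    return p[0] + p[1] * x

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = f(x, [1.0, 3.0]) + 0.1 * rng.standard_normal(x.size)

def neg_log_posterior(p, sigma=0.1, tau=10.0):
    e = y - f(x, p)
    nll = np.sum(0.5 * (e / sigma) ** 2)         # -ln likelihood (up to a constant)
    nlp = np.sum(0.5 * (p / tau) ** 2)           # -ln prior: zero-mean Gaussian on p
    return nll + nlp                             # the denominator of (3.12) is constant in p

p_map = minimize(neg_log_posterior, x0=np.zeros(2)).x
print(p_map)                                     # close to the MLE estimate here
```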

Estimator

Definition.

An estimator is a function that maps a set of observations to an estimate of the parameters. In other words, an estimator is the result of an estimation process and of a model applied to a set of data. If $ \mathbf{p}$ are the parameters we wish to estimate, then the estimator of $ \mathbf{p}$ is typically written by adding the `hat' symbol: $ \hat{\mathbf{p}}$. The notion of estimator is independent of any particular model or estimation process. In particular, an estimator need not be formulated as an optimization problem. This is the reason why the generic notation  $ \hat{\mathbf{p}}$ is used. Note that, by abuse of language, $ \hat{\mathbf{p}}$ designates either the estimator itself or the estimated parameters.

Bias.

The bias of an estimator is the average error between the estimates $ \hat{\mathbf{p}}$ and the corresponding true parameters  $ \underline{\mathbf{p}}$ (for a given set of data). Let  $ \mathbf{e} \in \mathbb{R}^d$ be the residuals associated with the estimated model, i.e. $ e_i = f(\mathbf{x}_i ; \hat{\mathbf{p}}) - y_i$. The bias is formally defined as (95):

$\displaystyle E[\hat{\mathbf{p}} ; \underline{\mathbf{p}}, \mathbf{e}] - \underline{\mathbf{p}} = \int p(\mathbf{e} ; \underline{\mathbf{p}}) \, \hat{\mathbf{p}} \;\mathrm d \mathbf{e} - \underline{\mathbf{p}},$ (3.14)

where $ E[\hat{\mathbf{p}} ; \underline{\mathbf{p}}, \mathbf{e}]$ is the expectation of the estimate  $ \hat{\mathbf{p}}$ knowing the true parameters  $ \underline{\mathbf{p}}$ and the errors  $ \mathbf{e}$. Another way to explain the bias of an estimator is to consider a given set of parameters  $ \underline{\mathbf{p}}$. An experiment is then repeated with the same parameters  $ \underline{\mathbf{p}}$ but with a different instance of the noise at each trial. The bias is the difference between the average value of the estimated parameters  $ \hat{\mathbf{p}}$ and the true parameters  $ \underline{\mathbf{p}}$.

A desirable property for an estimator is to be unbiased: on average, over an infinite set of data, there is no difference between the estimated parameters  $ \hat{\mathbf{p}}$ and the corresponding true parameters  $ \underline{\mathbf{p}}$.

Variance.

Another important property of an estimator is its variance. Imagine an experiment repeated many times with the same parameters  $ \underline{\mathbf{p}}$ but with a different instance of the noise at each trial. The variance of the estimator is the variance of the estimated parameters  $ \hat{\mathbf{p}}$. In the case where  $ \mathbf{p}$ is a vector, we talk about the covariance matrix instead of the variance. It is formally defined as:

$\displaystyle \mathrm{Var}[\hat{\mathbf{p}} ; \underline{\mathbf{p}} , \mathbf{e}] = E[(\hat{\mathbf{p}} - E[\hat{\mathbf{p}} ; \underline{\mathbf{p}}, \mathbf{e}])^2 ; \underline{\mathbf{p}} , \mathbf{e}].$ (3.15)

A low variance means that a change in the instance of the noise has little effect on the estimated parameters  $ \hat{\mathbf{p}}$. The variance of an estimator is sometimes called its efficiency. It is a desirable property for an estimator to be efficient, i.e. to have a low variance. In other words, it is preferable that the noise does not affect an estimator too much.

Mean-squared residual.

Usually, we are more interested in the Mean-Squared Residual (MSR) of an estimator than in its variance. The MSR measures the average squared error of the estimated parameters with respect to the true parameters. It is formally defined as:

$\displaystyle MSR[\hat{\mathbf{p}} ; \underline{\mathbf{p}} , \mathbf{e}] = E[(\hat{\mathbf{p}} - \underline{\mathbf{p}})^2 ; \underline{\mathbf{p}} , \mathbf{e}].$ (3.16)

The variance, the bias, and the MSR of an estimator are linked by the following relation:

$\displaystyle MSR[\hat{\mathbf{p}} ; \underline{\mathbf{p}} , \mathbf{e}] = \mathrm{Var}[\hat{\mathbf{p}} ; \underline{\mathbf{p}} , \mathbf{e}] + \big(E[\hat{\mathbf{p}} ; \underline{\mathbf{p}}, \mathbf{e}] - \underline{\mathbf{p}}\big)^2.$ (3.17)

Note that the rightmost term in equation (3.17) is the squared bias of the estimator.
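The definitions (3.14)-(3.17) can be checked numerically with a Monte Carlo simulation: repeat the experiment with the same true parameter and a fresh noise instance at each trial, then average. The sketch below uses the sample mean as estimator of a scalar parameter, a setup chosen only to illustrate the definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 2.0                           # true (scalar) parameter
n_trials, n_data, sigma = 10000, 20, 1.0

# Same true parameter, different noise instance at each trial.
estimates = np.array([
    np.mean(p_true + sigma * rng.standard_normal(n_data))     # sample-mean estimator
    for _ in range(n_trials)
])

bias = estimates.mean() - p_true                  # Monte Carlo version of (3.14)
variance = estimates.var()                        # (3.15)
msr = np.mean((estimates - p_true) ** 2)          # (3.16)
print(bias, variance, msr, variance + bias ** 2)  # MSR ~ variance + bias^2, cf. (3.17)
```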

Specific Techniques in Parameter Estimation

In this section we detail some classical estimators that derive from particular assumptions on the distribution of the noise. We also give the properties of these estimators.


Normal Distribution and Least-Squares

Normal distribution.

The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution used to describe a random variable that concentrates around its mean. This distribution is parametrized by two parameters: the mean  $ \mu \in \mathbb{R}$, and the standard deviation  $ \sigma \in \mathbb{R}_+^*$. The normal distribution with parameters $ \mu$ and $ \sigma$ is denoted  $ \mathcal{N}(\mu, \sigma)$. Table 3.1 gives the main properties of  $ \mathcal{N}(\mu, \sigma)$. Figure 3.1 illustrates the PDF of the normal distribution.


Table 3.1: Some properties of the normal distribution  $ \mathcal{N}(\mu, \sigma)$
Parameters $ \mu \in \mathbb{R}$, $ \sigma \in \mathbb{R}_+^*$
Support $ \mathbb{R}$
Mean $ \mu$
Variance $ \sigma^2$
PDF $ f_{\mathcal{N}(\mu,\sigma)}(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp \left ( - \frac{(x-\mu)^2}{2\sigma^2} \right )$


Figure 3.1: Graphical representation of the PDF of the normal distribution  $ \mathcal{N}(\mu, \sigma)$. This figure shows that a normally-distributed random variable is concentrated around the distribution mean $ \mu$. For example, on average, $ 95.5\%$ of the values taken by a normally-distributed random variable fall within the interval  $ [\mu - 2\sigma, \mu + 2\sigma]$. Figure 3.2 illustrates how unlikely it is for a normally-distributed random variable to deviate significantly from the distribution mean.

The multivariate case.

The multivariate normal distribution is a generalization of the normal distribution to higher dimensions, i.e. to random vectors instead of random variables. If we consider random vectors of dimension  $ k \in \mathbb{N}$, then this distribution is parametrized by two parameters: the mean  $ \mathbold{\mu}\in \mathbb{R}^k$ and the covariance matrix  $ \mathsf{\Sigma} \in \mathbb{R}^{k \times k}$. The multivariate normal distribution of dimension $ k$ with parameters  $ \mathbold{\mu}$ and  $ \mathsf{\Sigma}$ is denoted $ \mathcal{N}_k(\mathbold{\mu}, \mathsf{\Sigma})$. The covariance matrix  $ \mathsf{\Sigma}$ is the multidimensional counterpart of the monodimensional variance $ \sigma^2$. It is a nonnegative-definite matrix. The fact that the matrix  $ \mathsf{\Sigma}$ is diagonal indicates that the components of the random vector are independent. Having  $ \mathsf{\Sigma} = \mathbf{I}$ means that the components of the random vector are i.i.d. Table 3.2 sums up the main properties of the multivariate normal distribution.


Table 3.2: Some properties of the multivariate normal distribution  $ \mathcal{N}_k(\mathbold{\mu}, \mathsf{\Sigma})$.
Parameters $ \mathbold{\mu}\in \mathbb{R}^k$, $ \mathsf{\Sigma} \in \mathbb{R}^{k \times k}$
Support $ \mathbb{R}^k$
Mean $ \mathbold{\mu}$
Variance $ \mathsf{\Sigma}$
PDF $ f_{\mathcal{N}_k(\mathbold{\mu},\mathsf{\Sigma})}(\mathbf{x}) = \frac{1}{\vert\mathsf{\Sigma}\vert^{1/2} (2\pi)^{k/2}} \exp \left ( -\frac{1}{2} (\mathbf{x} - \mathbold{\mu})^\mathsf{T}\mathsf{\Sigma}^{-1}(\mathbf{x} - \mathbold{\mu}) \right )$


Link with least-squares estimators.

A least-squares estimator corresponds to the maximum likelihood estimator when the data are corrupted by normally-distributed noise. Let $ \{ \mathbf{x}_i \leftrightarrow \mathbf{y}_i \}_{i=1}^d$ be a dataset corrupted by an additive zero-mean i.i.d. Gaussian noise, i.e. $ \mathbf{e}_i \sim \mathcal {N}_m(\mathbf{0}, \mathsf{\Sigma})$ with $ \mathbf{e}_i = \mathcal {F}(\mathbf{x}_i ; \underline{\mathbf{p}}) - \mathbf{y}_i$, where $ \mathcal {F} : \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}^m$ is the parametric model and $ \underline{\mathbf{p}}$ are the true parameters of the model. The probability of observing the data point  $ \mathbf{x}_i \leftrightarrow \mathbf{y}_i$ given the parameters  $ \mathbf{p}$ is:

$\displaystyle p(\mathbf{x}_i \leftrightarrow \mathbf{y}_i ; \mathbf{p}) = \frac{1}{\vert\mathsf{\Sigma}\vert^{1/2}(2\pi)^{m/2}} \exp \left ( -\frac{1}{2} \big(\mathcal {F}(\mathbf{x}_i ; \mathbf{p}) - \mathbf{y}_i\big)^\mathsf{T}\mathsf{\Sigma}^{-1}\big(\mathcal {F}(\mathbf{x}_i ; \mathbf{p}) - \mathbf{y}_i\big)\right ).$ (3.18)

If we apply the principle of MLE of equation (3.11) to equation (3.18), then we obtain the following minimization problem:

$\displaystyle \min_{\mathbf{p}} \sum_{i=1}^d \Vert \mathcal {F}(\mathbf{x}_i ; \mathbf{p}) - \mathbf{y}_i \Vert^2_{\mathsf{\Sigma}},$ (3.19)

where $ \Vert \bullet \Vert _{\mathsf{\Sigma}}$ is the Mahalanobis norm. Equation (3.19) is the very definition of a weighted least-squares minimization problem. Note that if we further assume that the components of the error vectors  $ \mathbf{e}_i$ are i.i.d. or, equivalently, that  $ \mathsf{\Sigma} = \mathbf{I}$, then problem (3.19) reduces to a simple least-squares minimization problem. The tools that allow one to solve these types of problems have been presented in section 2.2.2.
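For a model that is linear in its parameters, problem (3.19) has a closed-form solution through the weighted normal equations. The sketch below (scalar measurements with per-point weights, i.e. a diagonal Mahalanobis weighting, and a made-up affine model) is one way to solve it; the tools of section 2.2.2 cover the general case.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 3.0 * x + 0.05 * rng.standard_normal(x.size)   # data with Gaussian noise

# Model linear in the parameters: f(x; p) = p[0] + p[1] * x, i.e. A p ~ y.
A = np.column_stack([np.ones_like(x), x])
w = np.ones_like(x)                   # per-point weights (e.g. inverse noise variances)

# Weighted normal equations: (A^T W A) p = A^T W y, with W = diag(w).
AtW = A.T * w
p_hat = np.linalg.solve(AtW @ A, AtW @ y)
print(p_hat)                          # close to [1.0, 3.0]
```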


Heavy-tailed Distributions and M-estimators

Inliers, outliers, and robustness of an estimator.

The data used to estimate the parameters of a model are generally contaminated with noise. It is common practice to assume that this noise is normally-distributed. A common justification for this assumption is the central limit theorem, which states that the sum of a sufficiently large number of independent random variables is approximately normally-distributed (even if each of these random variables is not normally-distributed, as long as their means and variances are defined and finite). However, it is not always reasonable to assume that the data measurements are normally-distributed. In particular, there can be gross errors in the data, which are extremely unlikely if we only consider a normally-distributed noise (see figure 3.2). These `false' measurements are called outliers. Stated otherwise, outliers are arbitrarily large errors. By contrast, measurements that are not outliers are called inliers. An estimator is said to be robust when outliers do not affect the estimation much.

Figure 3.2: Illustration of the probability density function of a centred normal distribution with standard deviation $ \sigma = 1$. The probability that a realization of this law lies in $ [-\sigma,\sigma]$ is $ 68.3\%$; it is $ 95.5\%$ for $ [-2\sigma, 2\sigma]$, and $ 99.7\%$ for $ [-3\sigma,3\sigma]$. A value that deviates from the distribution mean by more than 5 times the standard deviation is extremely unlikely.

M-estimators.

M-estimators are robust estimators that are constructed from MLE (or MAP) assuming a non-normally-distributed noise. The principle consists in taking a distribution whose PDF has `heavy tails'. Having such tails means that large errors are less improbable than they would be under the normal distribution. Many different M-estimators based upon various noise distributions have been proposed. Note that some of these M-estimators are statistically grounded in the sense that the considered distributions are related to actual facts. Other M-estimators are a little more artificial in the sense that their underlying probability distributions have been designed specifically for the sake of robustness. Sometimes, the underlying probability distribution of an M-estimator is not even an actual probability distribution since it does not integrate to 1. In this case, the `probability distribution' is to be considered as some kind of `score function' which takes high values for probable inputs and lower values for less probable inputs. This does not prevent one from applying the MLE principle to build a cost function.

We now give some notation and vocabulary on M-estimators. After that, we will detail some typical M-estimators.

Notation and vocabulary on M-estimators.

An M-estimator is typically defined using what is called a $ \rho$-function. The $ \rho$-function of an M-estimator is a function from  $ \mathbb{R}$ to $ \mathbb{R}_+$ that defines the underlying probability distribution of the M-estimator:

$\displaystyle p(\mathbf{e}) = \exp(-\rho(\mathbf{e})).$ (3.20)

The cost function minimized by the M-estimator is obtained by applying the MLE principle to the distribution of equation (3.20), which gives:

$\displaystyle \min_{\mathbf{p}} \sum_{i=1}^d \rho(\mathbf{e}_i).$ (3.21)

Note that the least-squares cost function is just a particular case of M-estimator with $ \rho(x) = x^2$. Two other interesting functions can be associated to an M-estimator: the influence function and the weight function. The influence function, denoted $ \psi$, is defined as:

$\displaystyle \psi(x) \stackrel{\mathrm{def}}{=} \frac{\partial \rho}{\partial x} (x).$ (3.22)

The weight function $ w$, also known as the attenuation function, is defined as:

$\displaystyle w(x) \stackrel{\mathrm{def}}{=} \frac{\psi(x)}{x}.$ (3.23)

It measures the weight given to a particular measurement in the cost function according to the error it produces. With M-estimators, the principle is to give large weights to small errors and small weights to large errors (which are likely to come from outliers). Note that, as defined in equation (3.23), the weight function $ w$ is not defined at zero. It is generally possible to define it at this particular point with a continuous extension.
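As a concrete example of the $ \rho$/$ \psi$/$ w$ triplet, here is a sketch of the Huber M-estimator (quadratic near zero, linear in the tails); the threshold value below is a commonly used tuning constant, not something prescribed by this section.

```python
import numpy as np

def huber_rho(x, tau=1.345):
    """Huber rho-function: quadratic for |x| <= tau, linear beyond."""
    a = np.abs(x)
    return np.where(a <= tau, 0.5 * x ** 2, tau * a - 0.5 * tau ** 2)

def huber_psi(x, tau=1.345):
    """Influence function psi = d(rho)/dx, cf. eq. (3.22)."""
    return np.clip(x, -tau, tau)

def huber_weight(x, tau=1.345):
    """Weight function w = psi(x)/x, cf. eq. (3.23), continuously extended by w(0) = 1."""
    return tau / np.maximum(np.abs(x), tau)

e = np.array([0.0, 0.5, 3.0, -10.0])
print(huber_rho(e))       # small errors penalized quadratically, large ones linearly
print(huber_psi(e))       # bounded influence of large errors
print(huber_weight(e))    # weight 1 for small errors, decreasing for large ones
```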

Common M-estimators.

Below is a list of some typical M-estimators along with some comments. A graphical illustration of these M-estimators is given in figure 3.3. Other M-estimators can be found in, for instance, (94) or (201).

Figure 3.3: Illustration of the M-estimators presented in section 3.1.2.2. For each of the least-squares, $ L_1$ norm, Huber, Tukey, Cauchy, Blake-Zisserman, and corrupted Gaussian estimators, the figure shows the $ \rho$-function, the influence function ($ \psi$), the weight function ($ w$), and the [pseudo-]distribution ( $ \exp(-\rho)$).

Optimization of M-estimators.

Let  $ f : \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}$ be the parametric model and let  $ \{ \mathbf{x}_i \leftrightarrow y_i \}_{i=1}^d$ be the dataset. Let us denote $ e_i$ the $ i$-th residual: $ e_i(\mathbf{p}) = f(\mathbf{x}_i ; \mathbf{p}) - y_i$. The general form of an M-estimator cost function is:

$\displaystyle \min_{\mathbf{p}} \sum_{i=1}^d \rho(e_i(\mathbf{p})).$ (3.24)

Solving this problem can be achieved with a generic optimization algorithm such as Newton's method. However, it is possible to turn problem (3.24) into a weighted least-squares problem with variable weights, which can be solved with Iteratively Reweighted Least-Squares (IRLS). Using IRLS is sometimes preferable to using a generic algorithm such as Newton's method. Turning problem (3.24) into an IRLS problem is achieved by writing the optimality conditions (see section 2.2) and developing them in the following manner:

$\displaystyle \boldsymbol{\nabla}_{\mathbf{p}} \left ( \sum_{i=1}^d \rho(e_i(\mathbf{p})) \right ) = \mathbf{0} \quad$ $\displaystyle \Leftrightarrow\quad \sum_{i=1}^d \psi(e_i(\mathbf{p})) \boldsymbol{\nabla}_{\mathbf{p}} e_i(\mathbf{p}) = \mathbf{0}$    
  $\displaystyle \Leftrightarrow\quad \sum_{i=1}^d w(e_i(\mathbf{p})) e_i(\mathbf{p}) \boldsymbol{\nabla}_{\mathbf{p}} e_i(\mathbf{p}) = \mathbf{0}$    
  $\displaystyle \Leftrightarrow\quad \sum_{i=1}^d w(e_i(\mathbf{p})) \frac{1}{2} \boldsymbol{\nabla}_{\mathbf{p}} \left ( e_i^2(\mathbf{p}) \right ) = \mathbf{0}.$ (3.25)

If we consider the value of  $ \mathbf{p}$ fixed in the factor  $ w(e_i(\mathbf{p}))$, then equation (3.25) corresponds to the optimality condition of the following weighted least-squares problem:

$\displaystyle \min_{\mathbf{p}} \sum_{i=1}^d w_i e_i^2(\mathbf{p}),$ (3.26)

with $ w_i = w(e_i(\mathbf{p}))$. Therefore, the initial problem (3.24) can be solved with the IRLS principle by iteratively updating the values $ w_i$ for $ i \in \llbracket 1,d \rrbracket $.
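A minimal IRLS sketch for problem (3.24), assuming a model that is linear in its parameters and reusing a Huber-type weight function (the threshold and the fixed iteration count are illustrative choices; practical implementations add a convergence test and an estimate of the noise scale):

```python
import numpy as np

def irls(A, y, weight_fn, n_iter=50):
    """Minimize sum_i rho(e_i(p)) for a linear model A p ~ y by IRLS.

    weight_fn maps residuals e_i to the weights w_i of problem (3.26).
    """
    p = np.linalg.lstsq(A, y, rcond=None)[0]      # least-squares initialization
    for _ in range(n_iter):
        e = A @ p - y                             # residuals e_i(p)
        w = weight_fn(e)                          # weights frozen at the current p
        AtW = A.T * w
        p = np.linalg.solve(AtW @ A, AtW @ y)     # weighted least-squares update (3.26)
    return p

# Example: line fitting with gross errors, using a Huber-type weight function.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 3.0 * x + 0.05 * rng.standard_normal(x.size)
y[::10] += 5.0                                    # outliers
A = np.column_stack([np.ones_like(x), x])
huber_w = lambda e, tau=0.1: tau / np.maximum(np.abs(e), tau)
print(irls(A, y, huber_w))                        # close to [1.0, 3.0] despite the outliers
```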

Note that minimizing an M-estimator cost function is not a trivial task, even though optimization algorithms are available. Indeed, M-estimator cost functions are generally not convex (except for the $ L_1$ norm and the Huber M-estimator). It is thus easy to fall into a local minimum.


Other robust estimators

M-estimators are not the only robust estimators. There exist other methods that cope with the presence of outliers in the data; we do not detail here the methods that are not used in the rest of this document. For instance, RANSAC (RANdom SAmple Consensus) (70) is a popular robust estimator able to fit a parametric model to noisy data containing outliers. The least median of squares (64,161) is another robust estimator which consists in replacing the sum of squares of the classical least-squares approach by the median of the squares:

$\displaystyle \min_{\mathbf{p}} \mathop{\mathrm{med}}_{i \in \llbracket 1,d \rrbracket } (f(\mathbf{x}_i ; \mathbf{p}) - y_i)^2.$ (3.27)

This approach is based on the fact that the median is a robust statistic while the mean is not (see the next paragraph). A common technique to solve problem (3.27) is to use a sampling approach. While this is possible and effective when the number of parameters to estimate is limited, it may become intractable for large numbers of parameters.
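A sketch of such a sampling approach for least median of squares on a line-fitting problem: each random minimal subset (two points) defines a candidate line, and the candidate with the smallest median of squared residuals is kept. This illustrates the idea rather than the exact procedure of (64,161).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 3.0 * x + 0.05 * rng.standard_normal(x.size)
y[:30] += 4.0                                         # 30% outliers

best_p, best_med = None, np.inf
for _ in range(500):                                  # random minimal samples
    i, j = rng.choice(x.size, size=2, replace=False)
    if np.isclose(x[i], x[j]):
        continue                                      # degenerate sample
    slope = (y[j] - y[i]) / (x[j] - x[i])             # line through the two points
    intercept = y[i] - slope * x[i]
    med = np.median((intercept + slope * x - y) ** 2) # median of squared residuals (3.27)
    if med < best_med:
        best_p, best_med = (intercept, slope), med
print(best_p)                                         # close to (1.0, 3.0)
```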

Mean, median, and trimmed mean.

Sometimes, it is interesting to get information on the central tendency of a set of values $ \{ x_i \}_{i=1}^d$. To do so, using the (arithmetic) mean of the dataset seems to be a natural choice. However, the mean is not a robust statistic. Indeed, a single outlier in the dataset can perturb this statistic arbitrarily (the breakdown point of the mean is 0).

Another measure of central tendency is the median. The median is the value that separates the higher half and the lower half of the dataset. Since the outliers are among the highest or the lowest values of the dataset, they do not perturb the median. The breakdown point of the median is $ \frac{1}{2}$.

The trimmed mean is another robust central statistic based on the same idea (i.e. that outliers are among the highest or the lowest values of the dataset). The trimmed mean is defined as the mean of the central values of the dataset (i.e. the initial dataset with the $ \alpha$ lowest and highest values removed). The breakdown point of the trimmed mean is $ \frac{2\alpha}{n}$, with $ n$ the total number of points.
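A small numerical illustration of the three statistics on a dataset with a single gross error (the values are made up; the trimmed mean below removes $ \alpha = 1$ value at each end):

```python
import numpy as np

data = np.array([9.8, 10.1, 9.9, 10.2, 10.0, 1000.0])    # one gross outlier

def trimmed_mean(values, alpha=1):
    """Mean of the dataset with the alpha lowest and alpha highest values removed."""
    s = np.sort(values)
    return s[alpha:len(s) - alpha].mean()

print(np.mean(data))        # ~175.0: a single outlier ruins the mean
print(np.median(data))      # ~10.05: the median is unaffected
print(trimmed_mean(data))   # ~10.05: so is the trimmed mean
```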

