Monday, 16 September 2013

Numerical Errors, Perturbation Analysis and Machine Learning

Everyone hates numerical errors. We love to think that computers are machines with infinite precision. When I was a student, I really hated error analysis. It sounded like a subject set out to study an annoying side effect of our imperfect computers, a boring detail miles away from anything that anyone would ever consider a nice part of mathematics. I will not try to convince you today that the opposite is true. However, even in error analysis there are some nice ideas and lessons to be learned. This post asks whether you should care about numerical errors if you are doing machine learning.

This issue should be well understood. However, I don't think that it is as well appreciated as it should be, or that it has received the attention it deserves. In fact, I doubt that the issue is discussed in any of the recent machine learning textbooks beyond the usual caveat "beware the numerical errors" (scary!).

In this blog post, I will illustrate the question of numerical errors and stability with the problem of learning a real-valued function on the $[0,1]$ interval. Let $f$ be this unknown function. We will assume that $f$ is square integrable, as our objective will be to produce an approximation to $f$ that is close to $f$ in the (unweighted) $2$-norm. For simplicity, let us assume that we have decided to approximate $f$ using polynomials of degree $d-1$. Thus, defining $G = \{ g \,:\, g(x) = \sum_{j=0}^{d-1} \alpha_j x^j, \alpha_0,\ldots,\alpha_{d-1}\in \mathbb{R} \}$, our goal is to find the minimizer

$P(f) = \arg\min_{g\in G} \|f-g\|_2$,                                 (Proj)

where $\|f\|_2^2 = \int_0^1 f^2(x) dx$. Clearly, $G$ is a $d$-dimensional subspace of $L^2([0,1])$. To represent the functions in $G$, let us choose a basis of $G$; call the $j$th basis element $p_j$ ($j=1,\ldots,d$). Thus, $p_j:[0,1] \rightarrow \mathbb{R}$ is some polynomial. One possibility, in the lack of a better idea, is to choose $p_j(x) = x^{j-1}$; if you want, I use the generic notation $(p_j)_{j=1}^d$ just for the sake of generality (and to play with you a bit later).

Since $(p_j)_j$ is a basis, any function $g$ of $G$ can be uniquely written as $g = \sum_{j=1}^d \theta_j p_j$. Since $g^* = P(f)$ is also an element of $G$, it can also be written in this form: $g^* = \sum_{j=1}^d \theta_j^* p_j$. To figure out $g^*$, it is thus sufficient to figure out the vector $\theta^* = (\theta_j^*)_j$. To find this vector, we can find the minimum of

$J(\theta) = \| f - \sum_{j=1}^d \theta_j p_j \|_2^2$ 

(since $u \mapsto u^2$ is monotone on $[0,\infty)$). For this, it is useful to remember that for $f\in L^2([0,1])$, $\|f\|_2^2 =\langle f,f \rangle$, where the inner product $\langle \cdot,\cdot\rangle$ is defined for all $f,g\in L^2([0,1])$ by $\langle f,g \rangle = \int_0^1 f(x)g(x) dx$. Taking the derivative of $J(\theta)$ with respect to $\theta_i$ and equating the result with zero, we get

$0=\frac{d}{d\theta_i} J(\theta) = 2 \langle \sum_{j=1}^d \theta_j p_j-f, p_i \rangle$.

Using the symmetry and linearity of the inner product $\langle \cdot,\cdot\rangle$,  introducing the matrix $A$ whose $(i,j)$th element is $a_{ij} = \langle p_i, p_j \rangle$ and the vector $b$ whose $i$th element is $\langle f,p_i \rangle$, collecting the above equations for $i=1,\ldots,d$ and reordering the terms, we see that these equations can be written in the short form

$A\theta = b$.

Note that here $A$ is a positive definite matrix. Solving this linear system will thus give us $\theta^*$ and hence $g^*$.
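To make this concrete, here is a minimal sketch in Python (numpy/scipy) of how one could assemble $A$ and $b$ and solve the normal equations. The test function, the dimension, the choice of the monomial basis and the use of numerical quadrature are all my own choices, just for illustration; this is not part of the argument.

# Sketch: pick a basis (here monomials, an arbitrary choice), assemble the Gram
# matrix A and the vector b by numerical quadrature, and solve A theta = b.
import numpy as np
from scipy.integrate import quad

d = 5                                                # dimension of the subspace G
f = np.exp                                           # stand-in for the unknown function
basis = [lambda x, k=k: x ** k for k in range(d)]    # p_j(x) = x^{j-1}

A = np.array([[quad(lambda x: p(x) * q(x), 0, 1)[0] for q in basis] for p in basis])
b = np.array([quad(lambda x: f(x) * p(x), 0, 1)[0] for p in basis])

theta_star = np.linalg.solve(A, b)                   # coefficients of g* = P(f)
g_star = lambda x: sum(t * p(x) for t, p in zip(theta_star, basis))
print(theta_star)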

So far so good. However, in practice, we rarely have access to the true function $f$. One commonly studied set-up in machine learning assumes that we have only a noisy "sample" of the form $((X_t,Y_t);t=1,\ldots,n)$, where the pairs $(X_t,Y_t)$ are i.i.d., $\mathbb{E}[Y_t|X_t] = f(X_t)$ and $\mathrm{Var}(Y_t|X_t)\le c<\infty$. However, I do not want to go that far; instead, consider the conceptually simpler case when, instead of $f$, we can access $\hat{f}: [0,1] \rightarrow \mathbb{R}$. Think of $\hat{f}$ as the "noise corrupted" version of $f$. Later it should become clear how the reasoning that follows extends to the case when only a finite dataset of the previously mentioned form is available.

Realizing that we do not have access to $f$, but only to $\hat{f}=f+\Delta f$, it is clear that we won't be able to calculate $g^* = P(f)$, but only (say) $P(\hat{f})$. Let's call the result $\hat{g}^*$: $\hat{g}^* = P(\hat{f})$. How far is $\hat{g}^*$ from $f$?

By the Pythagorean theorem, $\|f-\hat{g}^*\|_2^2 = \| f- g^* \|_2^2 + \|g^* - \hat{g}^* \|_2^2$.
Here, $ \| f- g^* \|_2^2$ is the error that can only be reduced by increasing the degree $d$ (or enlarging $G$), while $\|g^* - \hat{g}^* \|_2^2$ is the error due to starting with the noisy function $\hat{f}$ and not with $f$. The problem of studying the size of $\|f- g^* \|_2^2$ is the subject of approximation theory (where various nice results show how, e.g., the degree of smoothness and $d$ jointly influence the rate at which this error decreases to zero as $d$ goes to infinity), and this error is called the approximation error. We won't get into this nice subject here; perhaps in a future post.

So it remains to calculate how big the other term is (which is normally called the estimation error, as it measures the effect of the stochastic error $\Delta f$). One way to understand this term is via the error analysis (a.k.a. stability or perturbation analysis) of linear systems. Perhaps this is not the simplest way, but by pursuing this direction the article will perhaps be more entertaining, and hopefully we will also make some useful observations. Readers suspecting mischief are asked to stay calm.

Let us start this error analysis by observing that if we write $\hat{g}^* =\sum_{j=1}^d\hat{\theta}_j^* p_j$, then the vector $\hat{\theta}^*$ will be the solution to the linear system $A \theta = b+\Delta b$, where $\Delta b_j = \langle\Delta f,p_j\rangle$, $j=1,\ldots,d$. Now, $\| \sum_{j=1}^d \theta_j p_j \|_2^2 = \theta^\top A \theta$, hence

$\|g^* - \hat{g}^* \|_2^2 = ({\theta}^*-\hat{\theta}^*)^\top A ({\theta}^*-\hat{\theta}^*)$.

Since $A \hat{\theta}^* = b+\Delta b$ and $A {\theta}^* = {b}$, subtracting the two, we get $A ( \hat{\theta}^*- {\theta}^*)  = \Delta b$, from where we get that

$({\theta}^*-\hat{\theta}^*)^\top A ({\theta}^*-\hat{\theta}^*) = (\Delta b)^\top A^{-1} \Delta b$.

Hence,

 $\|g^* - \hat{g}^* \|_2^2= (\Delta b)^\top A^{-1} \Delta b\le \lambda_{\min}^{-1}(A) \|\Delta b \|_2^2$,                            (*)

where $\lambda_{\min}(A)$ denotes the minimum eigenvalue of $A$. Note that by appropriately choosing the error term $\Delta b$, the inequality can be made sharp. What inequality (*) thus means is that the error caused by $\hat{f}-f$ can be badly enlarged if $\Delta b$ has an unfortunate relation to $A$.
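Here is a quick numerical illustration (my own, in Python; the matrix and the size of the perturbation are arbitrary choices) of this sharpness claim: if $\Delta b$ is aligned with the eigenvector of $A$ belonging to $\lambda_{\min}(A)$, the two sides of (*) coincide.

# Check that (*) is attained when Delta b points along the eigenvector of A
# belonging to lambda_min(A). A is an arbitrary positive definite matrix.
import numpy as np
from scipy.linalg import eigh

d = 6
rng = np.random.default_rng(0)
M = rng.standard_normal((d, d))
A = M @ M.T                          # an arbitrary positive definite "Gram" matrix
lam, V = eigh(A)                     # eigenvalues in ascending order
delta_b = 1e-6 * V[:, 0]             # perturbation along the eigenvector of lambda_min

lhs = delta_b @ np.linalg.solve(A, delta_b)
rhs = (delta_b @ delta_b) / lam[0]
print(lhs, rhs)                      # the two numbers agree (up to rounding)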

Readers familiar with the error analysis of linear systems of equations will recognize that (*) is similar to, but still different from, the inequality that shows how errors propagate in the solution of linear systems due to coefficient errors (including errors in $b$). The difference is easy to understand: it is caused by the fact that in numerical analysis the error of the parameter vector is measured in the (unweighted) $2$-norm, whereas here the error is measured in the norm defined by $\|x\|_{A}^2 = x^\top A x$; in this application this norm is more natural than the usual $2$-norm. Thus, this is one of the lessons: use the norm that is the most appropriate for the application (because of this change in the norms, instead of the condition number of $A$, we see the inverse of the minimum eigenvalue of $A$).

To see how bad (*) can be, consider the case when $p_j(x) = x^{j-1}$, i.e., the standard polynomial basis. In this case, $a_{ij} = \int_0^1 x^{i+j-2} dx = [x^{i+j-1}/(i+j-1)]_0^1 = 1/(i+j-1)$. The resulting matrix is an old friend, the so-called Hilbert matrix. In fact, this matrix is most infamous for its horrible conditioning, but most of the blame should go to how small the minimum eigenvalue of $A$ is. In fact, a result of Szegő (see the reference at the end) shows that as $d\rightarrow \infty$, $\lambda_{\min}(A) \sim 2^{15/4} \pi^{3/2} d^{1/2} (\sqrt{2}-1)^{4d+4}$ (an amusing asymptotic relationship on its own). Thus the reciprocal of the minimum eigenvalue of $A$ is exponentially large in $d$, potentially totally defeating the purpose of increasing $d$ to reduce the approximation error.
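If you want to see this decay with your own eyes, here is a small Python check (my own, not part of the original argument) that computes $\lambda_{\min}$ of the Hilbert matrix for a few values of $d$ and prints the asymptotic formula quoted above next to it.

# lambda_min of the d x d Hilbert matrix vs. the quoted asymptotic formula.
import numpy as np
from scipy.linalg import hilbert, eigvalsh

for d in range(2, 11, 2):
    lam_min = eigvalsh(hilbert(d))[0]      # smallest eigenvalue (ascending order)
    asym = 2 ** (15 / 4) * np.pi ** 1.5 * np.sqrt(d) * (np.sqrt(2) - 1) ** (4 * d + 4)
    print(f"d={d:2d}  lambda_min={lam_min:.3e}  asymptotic={asym:.3e}")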

Thus, perhaps we should choose a different system of polynomials. Indeed, once we know that the goal is to keep $\lambda_{\min}(A)$ large, we can choose the polynomials so as to make this happen. One way, which also helps considerably with speeding up the calculations, is to choose $(p_j)$ such that $A$ is diagonal, and if diagonal, why not choose $(p_j)$ such that $A$ becomes the identity matrix. We can simply start with $p_1 = 1$ (because $\int_0^1 1^2 dx=1$) and then choose the coefficients of $p_2(x) = a_{20} + a_{21} x$ such that $\langle p_2,p_2 \rangle=1$ and $\langle p_1,p_2 \rangle = 0$ (this gives $[a_{20}^2 x+ a_{20}a_{21} x^2+a_{21}^2 x^3/3]_0^1=1$ and $[a_{20}x + a_{21} x^2/2]_0^1=0$, which can be solved for the coefficients $a_{20}$, $a_{21}$). Continuing in the same way, we can find $p_1,\ldots,p_d$, which are known as the shifted Legendre polynomials (the unshifted versions are defined in the same way on the $[-1,1]$ interval).
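For the curious, here is one way (a sketch of my own, certainly not the only one) to carry out this Gram-Schmidt construction numerically: represent each $p_j$ by its monomial coefficient vector, in which case $\langle p,q\rangle = c_p^\top H c_q$ with $H$ the Hilbert matrix, and orthonormalize; the Gram matrix of the result should come out as the identity.

# Gram-Schmidt in coefficient space; the inner product of two polynomials with
# monomial coefficient vectors c and e on [0,1] is c^T H e, with H the Hilbert matrix.
import numpy as np
from scipy.linalg import hilbert

d = 5
H = hilbert(d)

def inner(c, e):
    return c @ H @ e

P = []                                   # monomial coefficients of p_1, ..., p_d
for j in range(d):
    c = np.zeros(d)
    c[j] = 1.0                           # start from x^j
    for p in P:                          # remove components along the previous p's
        c = c - inner(c, p) * p
    P.append(c / np.sqrt(inner(c, c)))
P = np.array(P)

print(np.round(P @ H @ P.T, 10))         # should be (numerically) the identity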

At this stage, however, the astute reader should really wonder whether I have lost my sense of reality! How on earth should the choice of the basis of $G$ influence the error caused by the "noisy measurement"??? In fact, a very simple reasoning shows that the basis (and thus $A$) should not change how big $\|\hat{g}^*-g^* \|_2^2$ is. To see why, note that $\hat{g}^*-g^* = P(\hat{f})-P(f)$. Up to now, I have carefully avoided giving a name to $P$, but in fact, $P$ does not have to be introduced: it is the well-known orthogonal projection operator. As such, it is a linear operator. Hence, $P(\hat{f})-P(f) = P(\hat{f}-f) = P(\Delta f)$. It is also known that projections cannot make vectors larger (they are non-expansions in the $2$-norm that defines them). Hence, $\|P(\Delta f)\|_2 \le \|\Delta f\|_2$, which in summary means that

$\|\hat{g}^*-g^*\|_2 \le \|\Delta f \|_2$.

The "measurement" error of $f$ cannot be enlarged by projecting the measured function to the function space $G$. How do we reconcile this with (*)? One explanation is that (*) is too conservative; it is a gross over-estimate. However, we also said that the only inequality that was used could be sharp if $\Delta b$ was selected in an "unfortunate" fashion. So that inequality could be tight. Is this a contradiction? [Readers are of course encouraged to stop here and think this through before continuing with reading the rest.]

No, there is no contradiction. All the derivations were sound. The answer to the puzzle is that $\Delta b$ cannot be selected arbitrarily. Writing $\Delta f = \Delta f^{\parallel}+ \Delta f^{\perp}$, where $\Delta f^{\parallel}\in G$ and $\Delta f^{\perp}$ is perpendicular to $G$, we see that $\Delta b_j = \langle \Delta f^{\parallel}, p_j\rangle$. Since $\Delta f^{\parallel}\in G$, it can be written as $\sum_j \gamma_j p_j$ with some coefficients $(\gamma_j)$. Hence, $\Delta b_j = \langle \Delta f, p_j \rangle = \sum_k \gamma_k \langle p_k,p_j\rangle$, or $\Delta b = A \gamma$! Thus $\Delta b$ is restricted to be of the form $A\gamma$, where $\gamma$ is the coefficient vector of the projection of $\Delta f$ onto $G$. The rest is easy:

$\|g^* - \hat{g}^* \|_2^2= (\Delta b)^\top A^{-1} \Delta b = \gamma^\top A \gamma = \|P(\Delta f) \|_2^2$.
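A tiny numerical check (with a matrix and coefficients of my own choosing) makes the cancellation visible: when $\Delta b = A\gamma$, the quadratic form $(\Delta b)^\top A^{-1}\Delta b$ collapses back to $\gamma^\top A\gamma$, with no blow-up. In double precision the two printed numbers agree to many digits, though not exactly, which already hints at what comes next.

# When Delta b = A gamma, (Delta b)^T A^{-1} Delta b equals gamma^T A gamma.
import numpy as np
from scipy.linalg import hilbert

d = 6
A = hilbert(d)                                         # the Gram matrix of the monomial basis
gamma = np.random.default_rng(1).standard_normal(d)    # coefficients of P(Delta f)
delta_b = A @ gamma                                    # the induced perturbation of b
print(delta_b @ np.linalg.solve(A, delta_b), gamma @ A @ gamma)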

The conclusion? Was this a pointless exercise? Is the stability analysis of linear systems irrelevant to machine learning? No, not at all! What we can conclude is that although the basis chosen does not influence the error in the final approximator that can be attributed to errors in measuring the unknown function, this conclusion holds only if we assume infinite precision arithmetic. With finite precision arithmetic, our analysis shows that rounding errors can be blown up exponentially as the dimension of the subspace $G$ grows. Thus, with an improperly chosen basis, the rounding errors can totally offset or even reverse the benefit that is expected from increasing the dimensionality of the approximation space $G$.
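Here is a rough demonstration of this last point (my own experiment; the test function, the degrees and the use of numerical quadrature are all arbitrary choices): fit the same function with the monomial basis and with the orthonormal shifted Legendre basis, doing everything in plain double precision. Based on the analysis above, the expectation is that the Legendre column keeps improving while the monomial column stalls and eventually deteriorates once rounding errors take over; the exact numbers will depend on the setup.

# Compare the L2 error of the fit when using the monomial basis (Hilbert system)
# versus the orthonormal shifted Legendre basis, in double precision.
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.legendre import legval

f = lambda x: np.cos(6 * x)                  # an arbitrary smooth test function

def err_monomial(d):
    A = 1.0 / (np.add.outer(np.arange(d), np.arange(d)) + 1.0)   # Hilbert matrix
    b = np.array([quad(lambda x, k=k: f(x) * x ** k, 0, 1)[0] for k in range(d)])
    theta = np.linalg.solve(A, b)
    g = lambda x: np.polyval(theta[::-1], x)
    return np.sqrt(quad(lambda x: (f(x) - g(x)) ** 2, 0, 1)[0])

def err_legendre(d):
    def p(k, x):                             # orthonormal shifted Legendre polynomial
        c = np.zeros(k + 1)
        c[k] = np.sqrt(2 * k + 1)
        return legval(2 * x - 1, c)
    theta = np.array([quad(lambda x, k=k: f(x) * p(k, x), 0, 1)[0] for k in range(d)])
    g = lambda x: sum(t * p(k, x) for k, t in enumerate(theta))
    return np.sqrt(quad(lambda x: (f(x) - g(x)) ** 2, 0, 1)[0])

for d in (5, 10, 15, 20):
    print(f"d={d:2d}  monomial: {err_monomial(d):.2e}  Legendre: {err_legendre(d):.2e}")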

 If you cannot calculate it with a computer, you cannot estimate it either.

(The careful reader may wonder whether the rounding errors $\tilde{\Delta b}$ can lie in the dreaded subspace spanned by the eigenvectors of $A$ belonging to its smallest eigenvalues; unlike the measurement-induced $\Delta b$ above, they can.) Hence, when considering a finite-dimensional approximation, care must be taken to choose the basis functions so that the resulting numerical problem is stable.

Finally, one commenter on the blog once noted that I should post more about reinforcement learning. So, let me finish by remarking that the same issues exist in reinforcement learning as well. A basic problem in reinforcement learning is to estimate the value function underlying a stationary Markov policy. Long story short, such a policy gives rise to a Markov chain with a transition probability kernel $\mathcal{P}$. We can view $\mathcal{P}$ as an operator that maps bounded measurable functions $f:X \rightarrow \mathbb{R}$ to functions of the same kind: if $g=\mathcal{P} (f)$, then $g(x) = \int_{y} f(y) \mathcal{P}(dy|x)$, where the integration is over the state space $X$. Evaluating the said policy then amounts to solving

$(I-\gamma\mathcal{P}) f = r$                           (PEval)

in $f$, where $0\le \gamma < 1$ is the so-called discount factor and $r:X \rightarrow \mathbb{R}$ is the so-called reward function (again, $r$ is assumed to be bounded and measurable). A common method to solve linear inverse problems like the above is Galerkin's method. Long story short, this amounts to selecting an orthogonal basis $(\psi_k)$ of $L^2(X;\mu)$, with $\mu$ being a probability measure, and then solving the said system in $G= \{ f\,:\, f =\sum_{k=1}^d \alpha_k \psi_k, \alpha_1,\ldots,\alpha_d\in \mathbb{R} \}$ in the sense that the solution $g\in G$ is required to satisfy (PEval) in the 'weak sense': $\langle (I-\gamma\mathcal{P})g,\psi_k\rangle = \langle r,\psi_k\rangle$, $k=1,\ldots,d$. As it turns out, there are more and less numerically stable ways of selecting the basis $(\psi_k)$, though figuring out how to do this in the practical setting when $I-\gamma\mathcal{P}$ is unknown but can be sampled remains for future work.
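For a finite state space everything reduces to linear algebra, so a small sketch may help. Everything below (the chain, the reward, the features and the weighting measure) is made up purely for illustration, and the features are deliberately monomial-type, so the same conditioning issues as above can reappear as $d$ grows.

# Galerkin (projected) policy evaluation on a finite chain: solve the weak form
# Psi^T D (I - gamma P) Psi alpha = Psi^T D r, with D = diag(mu).
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 50, 5, 0.9

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)                           # transition matrix of the chain
r = rng.random(n)                                           # reward function
mu = np.full(n, 1.0 / n)                                    # weighting measure (uniform here)
Psi = np.vander(np.linspace(0, 1, n), d, increasing=True)   # features psi_1, ..., psi_d

D = np.diag(mu)
M = Psi.T @ D @ (np.eye(n) - gamma * P) @ Psi
c = Psi.T @ D @ r
alpha = np.linalg.solve(M, c)
v_galerkin = Psi @ alpha                                     # approximate value function

v_exact = np.linalg.solve(np.eye(n) - gamma * P, r)          # exact value function
print(np.max(np.abs(v_exact - v_galerkin)))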

Bibliography:
Szegő, G. (1982). On some Hermitian forms associated with two given curves of the complex plane. In Collected Papers, vol. 2, p. 666. Basel: Birkhäuser.

Sunday, 15 September 2013

Student Response Systems -- A Year Later

I promised to come back with my experience after the Fall semester of 2013. Though the end of that semester passed long ago, here are some thoughts:

Overall, my experience with socrative was very positive. During the first week, I polled the class to see how large a percentage of the students had some kind of wifi-enabled device that they could use. 90 out of the 95 students had some kind of device that they were willing to bring to class, so I decided to give socrative a try.

Socrative helped me tremendously to stay on top of what everyone in the class knew. The way I used socrative was as follows: after every block of new material (usually 5-10 slides), I inserted a few questions to verify whether the students "got" the new material. This was all the "extra work" that I had to put into designing the class material due to socrative. And I would have done something similar without socrative anyway, so in fact I did not feel much of a difference here. Once the question was projected, I started the question on socrative by pushing a button and then the students could start entering their answers. The answers are shown on the teacher's screen. Based on the feedback received, I could then decide whether I needed to re-explain something.

I used a tablet to connect to socrative, while I used a second computer to project my slides: this was handy as I did not have to exit the presentation to start the question in socrative. In the classroom there was a projector that could project the tablet's screen. Sometimes I showed the feedback screen even before all the answers were in. The students seemed to like this, as it created an interesting dynamic in the class.

I mostly used multiple choice questions, but I also used yes/no questions occasionally. In addition to these, I also experimented with "short answer" questions. This latter question type proved very useful and I generally prefer it. Not only is it less work to prepare "short answer" questions, but they are usually better at testing what the students know. To put it another way, it is really hard (and time-consuming) to design good multiple choice questions with valid-looking alternatives (if you have some ideas on how to automate this, I am all ears). Examples of the "short answer" questions I asked are: "What is the keyword that should go in this program in this place?" or "What will this program print?".

The feedback screen after a quiz-type question shows a histogram which is updated on the fly, so if you want, you can steer things in the class by showing how the histogram evolves. The same works for yes/no questions. Unfortunately, the feedback screen for the short answer questions is less than ideal, as it shows the list of answers as they come in. On this list, the same answer can in fact be repeated many times. Needless to say, this is not the best way of presenting this information. It would be much nicer, for example, if there was a histogram representing the top answers.

I also replaced the end-of-class questionnaire of previous years with the "exit quiz" that socrative provided. Unfortunately, the questions on the exit quiz are not configurable, and some of socrative's standard questions caused constant confusion ("solve the problem on the board" -- we did not even have a board!). Also, to my surprise, the exit quiz appeared to be less effective than the paper-based questionnaire in eliciting comments. Later, I figured out that I could leave the exit quiz open for a few days after class to collect more feedback; this helped, but unfortunately only a little bit. Reading through the exit-quiz responses is also a bit of extra work if you have not been doing something similar before, but this was not the case for me. And I actually like to read through these extra comments; they are very useful most of the time.

Once, I also experimented with socrative's "quiz" component. This allows teachers to compile a longer list of questions (a quiz) that the students can solve in class (either timed or untimed). This quiz was not marked. Unfortunately, there were too many technical difficulties: socrative was overloaded (the class size was around 90). Also, socrative eliminated leading whitespace characters from the answers, which was quite unfortunate as whitespace characters are crucially important in Python, the programming language that we used in this class. Thus, I decided not to use the quiz again.

In conclusion, although the system is not perfect, it was usable and overall it helped both the students and me. I received many comments from the students praising how useful socrative was for engaging everyone, including the shy students. As this was my main motivation for trying this system, I conclude that the experiment was successful.

PS: The next semester, I tried again. Unfortunately, that semester we were in a different classroom, where first the wifi service became unreliable and then it completely stopped working. I had no idea why; neither did anyone else. I asked the technical support people to fix it, but all I got was promises. This was sad. Luckily, this was a smaller class (around 50), so we could still have some true interaction with the students.

PPS: In the first semester, I also used a website where students could solve hundreds of little programming problems. The site provided the questions and it also gave feedback, even including things like "did you forget to initialize your variable?". Again, the tool had glitches (the question bank was not very well developed), but overall I think this was also a great tool.