Constrained MDPs and the reward hypothesis

It's been a looong time since I posted on this blog. But this should not mean the blog is dead. Slow and steady wins the race, right? Anyhow, I am back, and today I want to write about constrained Markov Decision Processes (CMDPs). The post is prompted by a recent visit of Eugene Feinberg, a pioneer of CMDPs, to our department, and also by a growing interest in CMDPs in the RL community (see this, this, or this paper). For impatient readers, a CMDP is like an MDP except that there are multiple reward functions, one of which is used to set the optimization objective, while the others are used to restrict what policies can do. Now, it seems to me that more often than not the problems we want to solve are easiest to specify using multiple objectives (in fact, this is a borderline tautology!). An example, which given our current sad situation is hard to escape, is deciding what interventions a government should apply to limit the spread of a virus while maintaining economic ...

Numerical Errors, Perturbation Analysis and Machine Learning

Everyone hates numerical errors. We love to think that computers are machines with infinite precision. When I was a student, I really hated error analysis. It sounded like a subject that set out to study an annoying side-effect of our imperfect computers, a boring detail that is miles away from anything that anyone would ever consider a nice part of mathematics. I will not try to convince you today that the opposite is true. However, even in error analysis there are some nice ideas and lessons to be learned. This post asks the question of whether, if you are doing machine learning, you should care about numerical errors.

This issue should be well understood. However, I don't think that it is as well appreciated as it should be, or that it has received the attention it should. In fact, I doubt that the issue is discussed in any of the recent machine learning textbooks beyond the usual caveat "beware the numerical errors" (scary!).

In this blog post, I will illustrate the question of numerical errors and stability with the problem of learning a real-valued function on the $[0,1]$ interval. Let $f$ be this unknown function. We will assume that $f$ is square integrable, as our objective will be to produce an approximation to $f$ that is close to $f$ in the (unweighted) 2-norm. For simplicity, let us assume that we have decided to approximate $f$ using polynomials of degree $d-1$. Thus, defining $G=\{g : g(x)=\sum_{j=0}^{d-1}\alpha_j x^j,\ \alpha_0,\dots,\alpha_{d-1}\in\mathbb{R}\}$, our goal is to find the minimizer

$$P(f)=\operatorname{argmin}_{g\in G}\|f-g\|_2, \qquad \text{(Proj)}$$

where $\|f\|_2^2=\int_0^1 f^2(x)\,dx$. Clearly, $G$ is a $d$-dimensional subspace of $L^2([0,1])$. To represent the functions in $G$, let us choose a basis of $G$; call the $j$th basis element $p_j$ ($j=1,\dots,d$). Thus, $p_j:[0,1]\to\mathbb{R}$ is some polynomial. One possibility, in the absence of a better idea, is to choose $p_j(x)=x^{j-1}$; if you want, I use $(p_j)_{j=1}^d$ just for the sake of increasing generality (and to play with you a bit later).

Since $(p_j)_j$ is a basis, any function $g$ of $G$ can be uniquely written as $g=\sum_{j=1}^d \theta_j p_j$. Since $g=P(f)$ is also an element of $G$, it can also be written in this form: $g=\sum_{j=1}^d \theta_j p_j$. To figure out $g$, it is thus sufficient to figure out the vector $\theta=(\theta_j)_j$. To find this vector, we can find the minimum of

$$J(\theta)=\Big\|f-\sum_{j=1}^d \theta_j p_j\Big\|_2^2$$

(since $u\mapsto u^2$ is monotone on $[0,\infty)$). For this, it is useful to remember that for $f\in L^2([0,1])$, $\|f\|_2^2=\langle f,f\rangle$, where the inner product $\langle\cdot,\cdot\rangle$ is defined for all $f,g\in L^2([0,1])$ using $\langle f,g\rangle=\int_0^1 f(x)g(x)\,dx$. Taking the derivative of $J(\theta)$ with respect to $\theta_i$ and equating the result with zero, we get

$$0=\frac{d}{d\theta_i}J(\theta)=2\Big\langle \sum_{j=1}^d \theta_j p_j - f,\, p_i\Big\rangle.$$

Using the symmetry and linearity of the inner product $\langle\cdot,\cdot\rangle$, introducing the matrix $A$ whose $(i,j)$th element is $a_{ij}=\langle p_i,p_j\rangle$ and the vector $b$ whose $i$th element is $b_i=\langle f,p_i\rangle$, collecting the above equations for $i=1,\dots,d$ and reordering the terms, we see that these equations can be written in the short form

$$A\theta=b.$$

Note that here $A$ is a positive definite matrix. Solving this linear system will thus give us $\theta$ and hence $g$.
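To make the recipe concrete, here is a small numerical sketch of the above (the target function and the midpoint quadrature are my own choices for illustration): we form the Gram matrix $A$ and the vector $b$ for the monomial basis, then solve $A\theta = b$.

```python
import numpy as np

d = 5  # dimension of G: polynomials of degree at most d - 1

def inner(u, v, n=100_000):
    """<u, v> = int_0^1 u(x) v(x) dx, approximated by midpoint quadrature."""
    x = (np.arange(n) + 0.5) / n
    return float(np.mean(u(x) * v(x)))

# Monomial basis p_j(x) = x^{j-1}; the Gram matrix a_ij = <p_i, p_j>
# then equals 1/(i+j-1), i.e., the Hilbert matrix discussed below.
p = [lambda x, k=k: x ** k for k in range(d)]
A = np.array([[inner(p[i], p[j]) for j in range(d)] for i in range(d)])

f = lambda x: np.sin(2 * np.pi * x)   # stand-in for the unknown function
b = np.array([inner(f, pj) for pj in p])

theta = np.linalg.solve(A, b)         # the normal equations A theta = b
g = lambda x: sum(t * x ** k for k, t in enumerate(theta))
approx_err = np.sqrt(inner(lambda x: (f(x) - g(x)) ** 2, lambda x: 1.0))
print(f"approximation error ||f - g||_2 = {approx_err:.4f}")
```

Increasing $d$ drives this approximation error down; that incentive is exactly what will later collide with the conditioning of $A$.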

So far so good. However, in practice, we rarely have access to the true function $f$. One commonly studied set-up in machine learning assumes that we only have a noisy "sample" of the form $((X_t,Y_t);\,t=1,\dots,n)$, where the $(X_t,Y_t)$ are i.i.d., $E[Y_t|X_t]=f(X_t)$ and $\mathrm{Var}(Y_t|X_t)\le c<\infty$. However, I do not want to go this far; instead, consider the conceptually simpler case when, instead of $f$, we can access $\hat f:[0,1]\to\mathbb{R}$. Think of $\hat f$ as the "noise corrupted" version of $f$. Later it should become clear how the reasoning that follows extends to the case when only a finite dataset of the previously mentioned form is available.

Realizing that we do not have access to $f$, but only to $\hat f=f+\Delta f$, it is clear that we won't be able to calculate $g=P(f)$, but only (say) $P(\hat f)$. Let's call the result $\hat g$: $\hat g=P(\hat f)$. How far is $\hat g$ from $f$?

By the Pythagorean theorem, $\|f-\hat g\|_2^2=\|f-g\|_2^2+\|g-\hat g\|_2^2$.
Here, $\|f-g\|_2^2$ is the error that can only be reduced by increasing the degree $d$ (i.e., enlarging $G$), while $\|g-\hat g\|_2^2$ is the error due to starting with the noisy function $\hat f$ and not with $f$. The problem of studying the size of $\|f-g\|_2^2$ is the subject of approximation theory (where various nice results show how, e.g., the degree of smoothness of $f$ and $d$ jointly influence the rate at which this error decreases to zero as $d$ goes to infinity) and this error is called the approximation error. We won't get into this nice subject; perhaps in a future post.

So it remains to calculate how big the other term is (which is normally called the estimation error, as it measures the effect of the stochastic error $\Delta f$). One way to understand this term is by using the error analysis (a.k.a. stability/perturbation analysis) of linear systems. Perhaps this is not the simplest way, but by pursuing this direction the article will perhaps be more entertaining, and hopefully we will also make some useful observations. Readers suspecting mischief are asked to stay calm.

Let us start this error analysis by observing that if we write $\hat g=\sum_{j=1}^d \hat\theta_j p_j$, then the vector $\hat\theta$ will be the solution to the linear system $A\theta=b+\Delta b$, where $\Delta b_j=\langle \Delta f,p_j\rangle$, $j=1,\dots,d$. Now, $\|\sum_{j=1}^d \theta_j p_j\|_2^2=\theta^\top A\theta$, hence

$$\|g-\hat g\|_2^2=(\theta-\hat\theta)^\top A(\theta-\hat\theta).$$

Since $A\hat\theta=b+\Delta b$ and $A\theta=b$, subtracting the two we get $A(\hat\theta-\theta)=\Delta b$, from which it follows that

$$(\theta-\hat\theta)^\top A(\theta-\hat\theta)=(\Delta b)^\top A^{-1}\Delta b.$$

Hence,

$$\|g-\hat g\|_2^2=(\Delta b)^\top A^{-1}\Delta b\le \lambda_{\min}^{-1}(A)\,\|\Delta b\|_2^2, \qquad (*)$$

where $\lambda_{\min}(A)$ denotes the minimum eigenvalue of $A$. Note that by appropriately choosing the error term $\Delta b$, the inequality can be made sharp. What inequality (*) thus means is that the error caused by $\hat f-f$ can be badly enlarged if $\Delta b$ has an unfortunate relation to $A$.

Readers familiar with the error analysis of linear systems of equations will recognize that (*) is similar to, but still different from, the inequality that shows how errors propagate into the solution of a linear system due to coefficient errors (including errors in $b$). The difference is easy to understand: it is caused by the fact that in numerical analysis the error of the parameter vector is measured in the (unweighted) 2-norm, whereas here the error is shown for the norm defined using $\|x\|_A^2=x^\top Ax$; in this application this error is more natural than the usual 2-norm. Thus, this is one of the lessons: use the norm that is the most appropriate for the application (because of this change in the norms, instead of the condition number of $A$, we see the inverse of the minimum eigenvalue of $A$).

To see how bad (*) can be, consider the case when $p_j(x)=x^{j-1}$, i.e., the standard polynomial basis. In this case, $a_{ij}=\int_0^1 x^{i+j-2}\,dx=[x^{i+j-1}/(i+j-1)]_0^1=1/(i+j-1)$. The resulting matrix is an old friend, the so-called Hilbert matrix. In fact, this matrix is most infamous for its horrible conditioning, but most of the blame should go to how small the minimum eigenvalue of $A$ is. In fact, a result of Szegő (1982) shows that as $d\to\infty$, $\lambda_{\min}(A)\sim 2^{15/4}\pi^{3/2}d^{1/2}(\sqrt{2}-1)^{4d+4}$ (an amusing asymptotic relationship on its own). Thus the reciprocal of the minimum eigenvalue of $A$ is exponentially large in $d$, potentially totally defeating the purpose of increasing $d$ to reduce the approximation error.
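The collapse is easy to witness numerically. The sketch below computes $\lambda_{\min}$ of the Hilbert matrix for growing $d$; each increment of $d$ shrinks it by a factor approaching $(1+\sqrt{2})^4\approx 34$, in line with the geometric factor in Szegő's asymptotics.

```python
import numpy as np

# lambda_min of the d x d Hilbert matrix a_ij = 1/(i+j-1), for growing d.
prev = None
for d in range(2, 11):
    idx = np.arange(1, d + 1)
    A = 1.0 / (np.add.outer(idx, idx) - 1.0)
    lam_min = np.linalg.eigvalsh(A)[0]   # eigenvalues come back in ascending order
    shrink = prev / lam_min if prev is not None else float("nan")
    print(f"d={d:2d}  lambda_min={lam_min:.3e}  shrink factor={shrink:5.1f}")
    prev = lam_min
```

Beyond $d\approx 13$ or so, $\lambda_{\min}$ sinks below the double-precision noise floor and the computed values themselves stop being trustworthy, which is rather the point of this post.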

Thus, perhaps we should choose a different system of polynomials. Indeed, once we know that the goal is to keep $\lambda_{\min}(A)$ large, we can choose the polynomials so as to make this happen. One way, which also helps considerably with speeding up the calculations, is to choose $(p_j)$ such that $A$ is diagonal; and if diagonal, why not choose $(p_j)$ such that $A$ becomes the identity matrix. We can simply start with $p_1=1$ (because $\int_0^1 1^2\,dx=1$) and then choose the coefficients of $p_2(x)=a_{20}+a_{21}x$ such that $\langle p_2,p_2\rangle=1$ and $\langle p_1,p_2\rangle=0$ (this gives $[a_{20}^2x+2a_{20}a_{21}x^2/2+a_{21}^2x^3/3]_0^1=1$ and $[a_{20}x+a_{21}x^2/2]_0^1=0$, which can be solved for the coefficients $a_{20}$, $a_{21}$). Continuing in the same way, we can find $p_1,\dots,p_d$, which are known as the shifted Legendre polynomials (the unshifted versions are defined in the same way on the $[-1,1]$ interval).
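The construction just described is plain Gram–Schmidt under the $L^2([0,1])$ inner product; a sketch (polynomials stored as coefficient arrays, lowest degree first):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(c1, c2):
    """<u, v> = int_0^1 u(x) v(x) dx for coefficient arrays (low degree first)."""
    prod = P.polymul(c1, c2)
    return float(sum(c / (k + 1) for k, c in enumerate(prod)))  # int x^k = 1/(k+1)

def shifted_legendre(d):
    """Orthonormal basis of degree-<d polynomials on [0,1] via Gram-Schmidt."""
    basis = []
    for j in range(d):
        c = np.zeros(j + 1)
        c[-1] = 1.0                                  # start from x^j
        for q in basis:
            c = P.polysub(c, inner(c, q) * q)        # subtract projections
        basis.append(c / np.sqrt(inner(c, c)))       # normalize
    return basis

basis = shifted_legendre(4)
print(basis[1])                  # ~ [-1.732, 3.464], i.e. sqrt(3) * (2x - 1)
gram = np.array([[inner(p, q) for q in basis] for p in basis])
print(np.round(gram, 12))        # ~ the identity matrix
```

The second polynomial solves exactly the $a_{20}$, $a_{21}$ equations above, and the Gram matrix of the system is the identity, as intended.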

At this stage, however, the astute reader should really wonder whether I have lost my sense of reality! How on earth should the choice of the basis of $G$ influence the error caused by the "noisy measurement"??? In fact, a very simple reasoning shows that the basis (and thus $A$) should not change how big $\|\hat g-g\|_2^2$ is. To see why, note that $\hat g-g=P(\hat f)-P(f)$. Up to now, I have carefully avoided giving a name to $P$, but in fact, $P$ does not have to be introduced: it is the well-known orthogonal projection operator. As such, it is a linear operator. Hence, $P(\hat f)-P(f)=P(\hat f-f)=P(\Delta f)$. It is also known that projections cannot make vectors larger (they are non-expansions in the 2-norm that defines them). Hence, $\|P(\Delta f)\|_2\le\|\Delta f\|_2$, which in summary means that

$$\|\hat g-g\|_2\le\|\Delta f\|_2.$$

The "measurement" error of $f$ cannot be enlarged by projecting the measured function to the function space $G$. How do we reconcile this with (*)? One explanation is that (*) is too conservative; it is a gross over-estimate. However, we also said that the only inequality that was used could be sharp if $\Delta b$ was selected in an "unfortunate" fashion. So that inequality could be tight. Is this a contradiction? [Readers are of course encouraged to stop here and think this through before continuing with reading the rest.]

No, there is no contradiction. All the derivations were sound. The answer to the puzzle is that $\Delta b$ cannot be selected arbitrarily. By writing $\Delta f=\Delta f_\parallel+\Delta f_\perp$, where $\Delta f_\parallel\in G$ and $\Delta f_\perp$ is perpendicular to $G$, we see that $\Delta b_j=\langle \Delta f_\parallel,p_j\rangle$. Since $\Delta f_\parallel\in G$, it can be written as $\sum_j \gamma_j p_j$ with some coefficients $(\gamma_j)$. Hence, $\Delta b_j=\langle \Delta f_\parallel,p_j\rangle=\sum_k \gamma_k\langle p_k,p_j\rangle$, or $\Delta b=A\gamma$! Thus $\Delta b$ is restricted to lie in the range of $A$. The rest is easy:

$$\|g-\hat g\|_2^2=(\Delta b)^\top A^{-1}\Delta b=\gamma^\top A\gamma=\|P(\Delta f)\|_2^2.$$

The conclusion? Was this a pointless exercise? Is the stability analysis of linear systems irrelevant to machine learning? No, not at all! What we can conclude is that although the basis chosen does not influence the error in the final approximator that can be attributed to errors in measuring the unknown function, this conclusion holds only if we assume infinite precision arithmetic. With finite precision arithmetic, our analysis shows that rounding errors can be blown up exponentially as the dimension of the subspace $G$ grows to infinity. Thus, with an improperly chosen basis, the rounding errors can totally offset or even reverse the benefit that is expected from increasing the dimensionality of the approximation space $G$.
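A quick experiment makes the point (the target function, the quadrature grid, and the use of float32 to mimic "short" arithmetic are all my own choices for illustration): we project the same function onto the same $G$ in reduced precision, once with the monomial basis and once with the orthonormal shifted Legendre basis.

```python
import numpy as np
from numpy.polynomial.legendre import legval

d = 8
x = (np.arange(20_000) + 0.5) / 20_000        # midpoint grid on [0, 1]
w = 1.0 / len(x)                              # quadrature weight per point
f = np.exp(x)                                 # function to be projected

def project(design, dtype):
    """Projection of f onto the span of the design columns, computed in dtype."""
    Phi = design.astype(dtype)
    A = (Phi.T * w) @ Phi                     # Gram matrix in the given precision
    b = (Phi.T * w) @ f.astype(dtype)
    theta = np.linalg.solve(A, b)
    return design @ theta.astype(np.float64)  # the fitted function on the grid

mono = np.stack([x ** k for k in range(d)], axis=1)       # p_j(x) = x^{j-1}
leg = np.stack([np.sqrt(2 * k + 1) * legval(2 * x - 1, np.eye(d)[k])
                for k in range(d)], axis=1)               # orthonormal, A ~ I

ref = project(leg, np.float64)                # trustworthy reference projection
for name, design in (("monomial", mono), ("Legendre", leg)):
    err = np.sqrt(np.sum(w * (project(design, np.float32) - ref) ** 2))
    print(f"{name:8s} basis, float32 solve: ||g - g_ref||_2 = {err:.1e}")
```

In exact arithmetic both runs would return the same projection; in float32, the monomial (Hilbert) Gram matrix amplifies the rounding errors by orders of magnitude, while the near-orthonormal basis leaves them at the level of the machine precision.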

 If you cannot calculate it with a computer, you cannot estimate it either.

(The careful reader may wonder whether the rounding errors $\widetilde{\Delta b}$ can lie in the dreaded subspace of $A$, but in this case they can.) Hence, when considering a finite-dimensional approximation, care must be taken to choose the basis functions so that the resulting numerical problem is stable.

Finally, one commenter on the blog once noted that I should post more about reinforcement learning. So, let me finish by remarking that the same issues exist in reinforcement learning as well. A basic problem in reinforcement learning is to estimate the value function underlying a stationary Markov policy. Long story short, such a policy gives rise to a Markov chain with a transition probability kernel $P$. We can view $P$ as an operator that maps bounded measurable functions $f:X\to\mathbb{R}$ to functions of the same kind: if $g=P(f)$, then $g(x)=\int f(y)\,P(dy|x)$, where the integration is over the state space $X$. Evaluating the said policy then amounts to solving

$$(I-\gamma P)f=r \qquad \text{(PEval)}$$

in $f$, where $0\le\gamma<1$ is the so-called discount factor and $r:X\to\mathbb{R}$ is the so-called reward function (again, $r$ is assumed to be bounded and measurable). A common method to solve linear inverse problems like the above is Galerkin's method. Long story short, this amounts to selecting an orthogonal basis $(\psi_k)$ in $L^2(X;\mu)$, with $\mu$ a probability measure, and then solving the said system in $G=\{f: f=\sum_{k=1}^d \alpha_k\psi_k,\ \alpha_1,\dots,\alpha_d\in\mathbb{R}\}$ in the sense that the solution $g\in G$ is required to satisfy (PEval) in the "weak sense": $\langle(I-\gamma P)g,\psi_k\rangle=\langle r,\psi_k\rangle$, $k=1,\dots,d$. As it turns out, there are numerically more and less stable ways of selecting the basis $(\psi_k)$, though figuring out how to do this in the practical setting when $I-\gamma P$ is unknown but can be sampled remains for future work.
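On a finite state space the weak-sense equations become a small linear system, so the scheme is easy to sketch (the chain, reward, weighting measure $\mu$, and polynomial features below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 50, 5, 0.9
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
r = rng.random(n)                          # reward function
v_true = np.linalg.solve(np.eye(n) - gamma * P, r)   # exact solution of (PEval)

x = np.arange(n) / n                       # embed the states in [0, 1)
Psi = np.stack([x ** k for k in range(d)], axis=1)   # polynomial features
mu = np.full(n, 1.0 / n)                   # weighting measure (uniform)

# Galerkin: require <(I - gamma P) g, psi_k>_mu = <r, psi_k>_mu for all k.
M = Psi.T @ np.diag(mu) @ (np.eye(n) - gamma * P) @ Psi
c = Psi.T @ (mu * r)
alpha = np.linalg.solve(M, c)
g = Psi @ alpha
rel_err = np.sqrt(mu @ (g - v_true) ** 2) / np.sqrt(mu @ v_true ** 2)
print(f"relative L2(mu) error of the Galerkin solution: {rel_err:.3f}")
```

With a basis that is orthonormal in $L^2(X;\mu)$, the Gram part of $M$ is the identity, mirroring the Legendre story above; with monomial-type features it degrades in the Hilbert-matrix fashion as $d$ grows.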

Bibliography:
Szegő, G. 1982. On some Hermitian forms associated with two given curves of the complex plane. In Collected Papers, vol. 2. Basel: Birkhäuser, p. 666.
